On April 11, John Horton Conway, the Princeton mathematician who invented the “Game of Life,” passed away from the coronavirus. Known as a “magical genius” whose curiosity extended far beyond mathematics, Conway was beloved by many, and his passing was a devastating blow.
Yet as news of his passing broke, an interesting scenario developed. Instead of a formal statement from the institution or his family, the news first appeared on Twitter. With no verifiable proof of the claim, many were left struggling to determine whether to believe the story.
This scenario, a questionable story that can be proven true or false only with time, presents a challenge for combating the spread of false information online. As we have seen many times before on social media, stories are often shared before the information is verified. Unfortunately, this will only become more common, especially in an election year and during a pandemic. Examining how social media responded during this particular event can therefore help us better understand the rules and patterns that drive the spread of information online.
Around 2:00 p.m. Eastern on Saturday, April 11, news started to spread on social media that John Horton Conway had died. The main source was a tweet from a fellow mathematician, who expressed his condolences and shared a story of Conway writing a blog post for April Fools’ Day.
As the news began to spread, most individuals who saw the tweets accepted the information as true and began expressing condolences themselves.
However, some began to question the news, mainly because the original tweet cited no source verifying the claim. As time went on, people began to speculate that the story might be a hoax, and many deleted or retracted their earlier tweets. A void existed where a source should have been.
Users filled that void with Wikipedia, a platform where any individual can edit the information on any given page. This led to a series of citation conflicts: some users added the news of Conway’s death to his page, while others deleted it, citing the lack of a source.
The confusion eventually died down as more individuals who knew John Horton Conway explained what had happened, and how they knew. Indeed, the account that first broke the news followed up later with an explanation of what happened. But in that brief window where questions arose, we received a glimpse into how social media reacts to questionable news. And as if discovering the rules to a “Game of Misinformation,” this teaches us a few important lessons about user behavior and how misinformation spreads over time.
First, most users quickly trusted the initial reports as the information filtered in. This is to be expected: research has shown that individuals tend to trust those in their social networks. And indeed, the mathematician whose tweet served as the primary source, while not the person closest to the deceased, belonged to the same community. In other words, what he said carried weight. Further, because the tweet linked to an article in Scientific American, users may have drawn a connection between the news and the article, even though the tweet specified that was not the case.
Because of this level of trust within networks, individuals must carefully consider both the content and the context in which they share information. Rushing to post breaking news can cause significant harm when that information turns out to be incorrect. At the same time, presentation can drastically affect how readers digest information. In this case, linking to the Scientific American story provided interesting context about the man behind the name, but it could also give readers the impression that the article supported the claim that he had died. That is not to say that any tweets in this situation were hasty or ill-conceived, but individuals must remain mindful of how the information they share online is presented and may be perceived by their audience.
Second, people do read comments and replies. The original tweet or social media post may receive the most attention, but many users will scroll through the comments, especially those who posted the original material. This leads to two key conclusions. First, users should critically examine information and wait for additional verification before accepting assertions as truth. Second, when information seems incorrect, or at least unverified, users can and should engage with the content to point out the discrepancy. This can mean the difference between a false story reaching 1,000 people or 1,000,000 before the information is verified or disproven. While this will not stop the spread of false information outright, it can lead to retractions and a general awareness among other users, which will “flatten the misinformation curve,” so to speak.
Finally, when a void of sources exists, individuals may turn to other media or hastily reported news to bolster their point of view. In this case, so-called “edit wars” developed on John Conway’s Wikipedia page, with some users writing that he had died while others removed the information. While it is impossible to say whether the individuals who edited the Wikipedia page also used it as evidence to support the original tweet, the episode highlights how easily a similar method could be used in the future. In the hours after a questionable story breaks, users often have to rely on the word of a small number of individuals. When that is the case, some may try to leverage the implicit trust we place in other institutions to bolster their claims and arguments. Here it was Wikipedia, but it could be any institution. Users must carefully consider the possible biases and exploits associated with specific sources.
Like Conway’s Game of Life, there are patterns to how information spreads online. Understanding these patterns and the rules by which false information changes and grows will be critical as we prepare for the next challenge. Sadly, the story that spread earlier this month turned out to be true, but the lessons we can learn from it can be applied to similar stories moving forward.
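As an aside for readers unfamiliar with the game itself: Conway’s rules fit in a few lines of code. The Python sketch below is purely illustrative and comes from the standard rules of the game, not from any of the events above. A cell is born with exactly three live neighbors and survives with two or three; everything else dies. Even these two simple local rules produce the complex, evolving patterns the analogy draws on.

```python
from itertools import product

def step(live):
    """Advance Conway's Game of Life by one generation.
    live: a set of (x, y) coordinates of living cells."""
    # Count live neighbours for every cell adjacent to a live cell.
    counts = {}
    for (x, y) in live:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                cell = (x + dx, y + dy)
                counts[cell] = counts.get(cell, 0) + 1
    # Birth with exactly 3 neighbours; survival with 2 if already alive.
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in live)}

# A "blinker" oscillates between a horizontal and a vertical bar.
blinker = {(0, 1), (1, 1), (2, 1)}
print(sorted(step(blinker)))            # [(1, 0), (1, 1), (1, 2)]
print(step(step(blinker)) == blinker)   # True: back to start after two steps
```

The point of the analogy holds in miniature here: the global behavior (an oscillating pattern) is nowhere stated in the rules, yet it emerges from them, much as large-scale information cascades emerge from individual decisions to share, question, or delete.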
Jeffrey Westling is a technology and innovation policy fellow at the R Street Institute, a free-market think tank based in Washington, D.C.