With increased regulatory pressure surrounding the platform’s role in distributing disinformation (often to bloody and disastrous effect), Facebook-owned WhatsApp this week announced it would more tightly restrict how messages can be forwarded within the app. Under the new system, if a user receives a “highly forwarded” message – one that has already been forwarded more than five times – that user can only send it on to a single chat at a time. Previously, users could forward such messages to five chats at a time, a limit implemented last year.
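The rule as described amounts to a simple threshold check. The sketch below is purely illustrative – WhatsApp's actual implementation isn't public, and the function and constant names here are hypothetical, mirroring only the limits described above:

```python
# Hypothetical sketch of the forwarding rule as described in this article.
# WhatsApp's real implementation is not public; names and thresholds here
# simply mirror the reported behavior.

HIGHLY_FORWARDED_THRESHOLD = 5  # "highly forwarded" = forwarded more than five times
DEFAULT_CHAT_LIMIT = 5          # per-forward limit introduced last year
RESTRICTED_CHAT_LIMIT = 1       # new limit for highly forwarded messages

def max_chats_per_forward(times_forwarded: int) -> int:
    """How many chats a single forward action may target."""
    if times_forwarded > HIGHLY_FORWARDED_THRESHOLD:
        return RESTRICTED_CHAT_LIMIT
    return DEFAULT_CHAT_LIMIT
```

Note that nothing in this rule caps how many times a user repeats the forward action, which is exactly the loophole the next paragraph points out.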
It doesn’t block all message forwarding (you can still smash the forward button individually as many times as you like), but it does introduce a little “friction” in a bid to slow mass forwarding in general. Over at the WhatsApp blog, the company explains its thinking:
“Is all forwarding bad? Certainly not. We know many users forward helpful information, as well as funny videos, memes, and reflections or prayers they find meaningful. In recent weeks, people have also used WhatsApp to organize public moments of support for frontline health workers. However, we’ve seen a significant increase in the amount of forwarding which users have told us can feel overwhelming and can contribute to the spread of misinformation. We believe it’s important to slow the spread of these messages down to keep WhatsApp a place for personal conversation.”
WhatsApp also says that last year it introduced double-arrow labels to flag forwarded messages that did not originate from a “close contact,” making it clearer which messages effectively came from someone you trust, versus mass-forwarded memes or spam. It’s not yet clear how impactful the new limit will be in places like India, where misinformation has helped fuel violence against religious minorities for several years.
But as we’ve noted previously, these problems often extend well beyond just WhatsApp, making it illogical to place the entire onus for fixing them squarely on WhatsApp’s shoulders. There’s also a mountain of cultural and technical issues (like moderating what’s sent inside end-to-end encrypted messages) that makes the assumption that WhatsApp can “just fix this” overly simplistic. Still, with the app now being used to spread bogus coronavirus information, the stakes have grown higher, and the calls from regulators and governments to “do more” have grown louder.
But again, there are numerous factors at play, and it has long been clear that any solution will be complicated and multi-faceted.
In many countries, social media applications have been conflated with the internet itself, creating a walled-garden “internet” consisting of just a few apps and sources – a less open ecosystem where it’s easier than ever to spread disinformation. Often that’s by design, as we saw with Facebook’s “Free Basics” program, which attempted to help the company corner developing-nation ad markets by offering free access to a Facebook-curated selection of content, but not to the full internet. Add in government censorship, and it gets even more complicated.
That’s not to say WhatsApp shouldn’t continue to experiment with ideas to slow the spread of mis- and disinformation. The company has helped promote a World Health Organization bot aimed at providing more accurate information, and it recently donated $1 million to the International Fact-Checking Network.
But given the scope and complexity of the problem, it’s going to take a hell of a lot more than WhatsApp tweaks to fix a global, surging disinformation crisis. It’s going to require a cooperative, worldwide shift in media literacy and critical thinking – combined with mass collaboration between governments, platforms, academics, and users – with nary a single silver bullet anywhere in sight.