
Bearers of Bad News: The Unchecked Spread of Disinformation through Messenger Platforms

Last week, elections in the Indian state of Karnataka caught the world’s attention. For many, the results could hold a clue to the fate of Prime Minister Modi’s Hindu nationalist Bharatiya Janata Party going into next year’s national election. Yet, the draw of the story was not the election itself – it was instead the prodigious use and misuse of WhatsApp messaging leading up to the May 12 election.

It is not the first time private messaging platforms like WhatsApp or Facebook Messenger have stood accused of enabling disinformation, "fake news," and, at times, violent clashes in already-polarized societies. While attention remains fixed on the more public social media platforms like Facebook and Twitter, private, "dark social" messengers may bear even more of the blame for the viral spread of disinformation, which is nearly impossible to track or counter once it is in circulation.

Safe from prying eyes through secure messaging

Emails, text messages, and group chats, the digital versions of person-to-person communication, have exploded with the spread of mobile phones, bringing a host of platforms that promise to keep users more connected than ever. Much like word of mouth, however, the very nature of these connections makes them nearly impossible to track, inspiring the term "dark social" for peer-to-peer messaging. The name reflects the wide blind spot these channels create for anyone hoping to trace the source of online traffic.

For the most popular of these services, privacy is precisely what makes them valuable. Messaging services like WhatsApp, Telegram, and Signal promise security and built-in encryption to shield users and their messages from prying eyes, a feature that has drawn frustration and outright threats from governments such as those of Iran and Russia. Free expression and civil rights activists have fought relentlessly for these private, secure connections, and continue to do so as governments threaten platform bans at every turn.

In principle, these protections are a good thing. Secure platforms keep users in authoritarian environments safe and able to communicate, organize demonstrations, or even coordinate relief efforts. Users can remain anonymous, and an internet connection is all that is required to send a message to hundreds with the push of a button. This offers the ability to engage and share information rapidly within a private, trusted network, or more widely through options like expanded groups (WhatsApp), content-pushing “channels” (Telegram), or “Official Accounts” (WeChat), to name a few. News outlets and political parties join the ranks of family and friends in embracing the platforms, engaging and expanding their audiences with updates and interactive discussions more easily than ever before.

The dark side of the media

As is frequently the case with new technologies, however, the growth of private, encrypted messaging services has also attracted nefarious users, with nefarious results. Most notoriously, terrorist cells have used the platforms to evade tracking, including busy Telegram channels churning out propaganda at the height of ISIS's reach. In large, interactive groups, it becomes impossible to trace the origin of a message, be it a shared link, an image, or the text itself. What's more, while some messages are intentionally false, like the fabricated opinion poll circulated ahead of the Karnataka election, many others grow out of rumors run amok, ranging from false alarms of an impending storm in South Africa, to a mob attack on suspected kidnappers in Brazil, to sectarian clashes in Sri Lanka.

The "dark side of the media" is far from new. "Instead of serving the public and speaking truth to power," write Mark Nelson and Jan Lublinski, "many media may act as mouthpieces of the powerful, repeat rumors without verification, discriminate against minorities, and feed the polarization of societies." This is just as true, and just as concerning, in 2018's world of digitization and smartphones as it was in 1994 Rwanda, only now in newer, subtler formats.

Meanwhile, more than five billion people around the globe have ready access to a mobile phone, and 1.5 billion of them are on WhatsApp, sending a total of 60 billion messages per day. In a matter of seconds, an individual can update hundreds on, for instance, developing demonstrations against a corrupt government. But also in a matter of seconds, an individual can sow confusion and mistrust through conspiracy theories, or set off a firestorm of sectarian riots and mob violence. All with little to no means of tracking the origins of the message, or discrediting false information before it reaches hundreds more.

Limited by their own security protocols, platforms turn to users to counter disinformation

In Sri Lanka, the response was to shut down the platforms entirely, on the theory that if users could not access Facebook and WhatsApp, the offending messages would be cut short, and the violent clashes along with them. Yet cutting off conversation in the midst of such furor leaves audiences at the mercy of gossip and limits the means of refuting false statements, not to mention communication for much-needed aid and relief. Rights activists, journalists, and everyday citizens living under the watchful eyes of authoritarian governments would be quick to agree: rescinding the privacy and protection of peer-to-peer platforms cannot be the answer.

For now, responses to the spread of disinformation on private platforms rely entirely on the audiences exposed to it: group members turning to a wider circle or to fact-checkers like BOOM to verify information, reporting questionable content, and building critical thinking and media literacy skills. While WhatsApp has announced stepped-up efforts to block likely spammers, hoping to catch some of the worst offenders, parent company Facebook has focused its response on its more public social media platform. Unfortunately, its current tactics, such as limiting advertising opportunities for "fake news" producers or using algorithms to bury bad content, have limited applicability in the realm of peer-to-peer messaging, where the incentives are unmonetized and, in the case of rumors run amok, entirely nonexistent.

Still, media literacy programs are no silver bullet. And even if fact-checkers had the bandwidth to cover innumerable private groups and a constant churn of rumors, by the time a questionable message reaches them, it has already passed through too many (untraceable) hands to count. The best response, then, might be the one we have pushed all along. As one panelist remarked at the recent PeaceTech Summit, "the solution to hate speech is not less speech, it's more speech." The solution to disinformation and "fake news" is not cutting off all information, but rather better supporting trustworthy, independent information.


Kate Musgrave is the Assistant Research and Outreach Officer at the Center for International Media Assistance. Find her on Twitter at @kate_musgrave.
