Claire Wardle rejects the phrase “fake news.” She dislikes it so much that she even censors it in conversation.
“I refuse to use ‘F-asterisk-asterisk-asterisk news,’” Wardle told Brian Stelter in this week’s edition of the Reliable Sources podcast.
Wardle is the Strategy and Research Director of First Draft News, a nonprofit research group housed at the Shorenstein Center at Harvard University.
The group specializes in verifying user-generated content. First Draft’s researchers analyze the “information disorders” that afflict the social web, helping reporters and everyday internet users identify online hoaxers and foreign propaganda agents who threaten the democratic process in the United States and around the world.
When it comes to defining the wide range of issues regarding the spread of inaccurate information online, word choices matter. Wardle maintains that the phrase “fake news” is “woefully inadequate” to describe the issues at play.
“Instead, I talk about information pollution,” she told Stelter. In a new report commissioned by the Council of Europe, Wardle and co-author Hossein Derakhshan distinguish between three types of problems: mis-information, dis-information, and mal-information.
How are they defined?
Mis-information is false information spread online by people who have no harmful intent.
“That might be my mom sharing a shock photo from Hurricane Irma,” without realizing that it’s actually an old photo from an unrelated event, Wardle said. “She just hasn’t checked it properly.”
Dis-information is false information created and shared by people with harmful intent. False news reports around presidential candidates ahead of the 2016 election fall into this category, and so does their social media amplification from malicious accounts.
Mal-information is the sharing of “genuine” information with the intent to cause harm. That includes some types of leaks, harassment and hate speech online. Wardle cites Hillary Clinton’s leaked emails as an example of mal-information.
“Those emails were genuine, that wasn’t false information, but the people who were spreading that were trying to cause harm by moving them from a private space to a public space,” she explained.
As Congress investigates the full scope of Russian propaganda on social media during the 2016 presidential election, Wardle warns that fake ads, Facebook pages and malicious bot accounts on Twitter are just one sliver of a much more nuanced issue, one that has morphed dramatically over time.
“Six months ago, [our focus] was Macedonian teenagers,” she told Stelter.
Also, “information disorders” look different based on geographic location.
In Kenya, India, or the Philippines, Wardle explains, one of the main concerns is “rumors about science and health” circulating on closed messaging apps like Facebook Messenger and WhatsApp. In Asia, similar apps such as Line, Viber and WeChat are all breeding grounds for information pollution.
“This truly is a global problem,” Wardle told Stelter. Reducing it all to “fake news,” she argued, has broader implications.
That phrase, in fact, “is being used globally by politicians to describe information that they don’t like, and increasingly, that’s working,” she said. The news industry has to recognize that the phrase “fake news” has become “weaponized,” and that “we have a responsibility to just not use that word,” she added.