Throughout history, the media has been used to manipulate and influence political outcomes. Julius Caesar used war journaling, or a written firsthand account of war and its rationale, to justify the invasion and destruction of the Gallic tribes two millennia ago. His heir, Octavian, used a disinformation campaign to convince the Senate to declare war against Mark Antony.
Centuries later, new communications tools, such as the printing press and movable type, made it easier for thinkers including John Locke and Thomas Paine to influence new forms of democracy, which, while imperfect, have been more successful than other forms of government.
What these historic figures didn’t have access to is digital technology, which democratizes media but also eases the spread of disinformation. The internet allows everyone with access to participate, but it also amplifies the effectiveness of media manipulation through automation. Botnets (networks of automated social media accounts designed to look as if they’re owned by real people) act as a force multiplier, distributing disinformation in the form of “fake news” on a massive scale. We saw this in the 2016 US presidential election, as Russian propaganda flooded social media platforms to divide the nation.
However, a broad community of good actors, including foundations and nonprofits committed to journalistic integrity, is emerging to promote media literacy and good-faith reporting. As a news consumer and a philanthropist (not an expert), I’m invested in supporting these knowledgeable institutions and initiatives to ensure that we have access to fair and accurate reporting. I care about this issue because I believe a strong, vigorous and trustworthy press is the immune system of democracy.
Last year that immune system failed, which is why we need to focus on rebuilding and sustaining a trustworthy press now more than ever. It is a complex task, given the evolving uses of technology to spread disinformation, but the remedy must begin with a more formalized journalistic code of ethics for the digital age.
There are several governing principles for such a code, more than can be covered in this piece. In fact, the Society of Professional Journalists (SPJ) has a comprehensive code of ethics for news organizations to follow, and the Online News Association has a “Build Your Own” ethics code project, designed to let news organizations build codes of ethics for a digital age.
However, a few principles stand out as fundamental to trustworthy reporting on digital channels. First, all media organizations with a digital presence should commit to a standard of honest reporting, one that places fact-checking and the use of multiple sources above speculation and single sources. SPJ already provides guidelines to this effect, advising journalists of any medium to always verify information, use original sources, avoid conflicts of interest and explain ethical choices and processes to audiences.
A second principle, already standard in traditional media, should be that if a digital journalist makes a mistake, the outlet should admit it and correct it immediately. After all, when a mistake is made, the journalist is accountable not just to the news organization but to the readers at large. A correction should be given prominence equal to or greater than that of the original error. For example, if the mistake is shared on a media outlet’s website and across its social media platforms, the correction should be shared in the same manner. As a news consumer, I’d like to see this happen in an effective and timely manner, so that the false information has a short digital lifespan.
A third principle, building on an idea from The Trust Project’s Director Sally Lehrman and Google News Vice President Richard Gingras, encourages hyperlinking statements of fact, statistics and quotes to their original sources. This allows audiences to click through and assess the veracity of an article based on the research that has gone into its preparation. “An effective system would also allow audiences to alert editors to perceived inaccuracies … and follow corrections,” Lehrman and Gingras note.
A final principle would commit editors to drafting headlines that accurately represent the content of the article, rather than giving the reader a false impression of what the piece is about. For example, opinion pieces should be labeled as such, not presented as news articles. I’ve also observed a number of headlines that phrase an unverified claim as a question, which can leave the reader misinformed. While headlines should catch a reader’s attention, they shouldn’t be deceitful.
We need help from social media platforms to reinforce these emerging standards and to counter media manipulation. Google, Facebook, Twitter, Bing and others are working to address this. Their work includes a commitment to using “Trust Indicators,” launched on November 16 by the Trust Project, a nonpartisan initiative from Santa Clara University’s Markkula Center for Applied Ethics. The Trust Indicators are a new set of transparency standards that help news consumers easily assess the quality and reliability of journalism. According to the Trust Project, “each indicator is signaled in the article and site code, providing the first standardized technical language for platforms to learn more from news sites about the quality and expertise behind journalists’ work.”
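To make “signaled in the article and site code” concrete, here is a minimal sketch of what machine-readable trust signals in a publisher’s page might look like. It assumes schema.org-style JSON-LD with a NewsArticle type; the specific properties, names and URLs shown are illustrative assumptions, not the Trust Project’s actual specification.

```python
import json

# Illustrative sketch of embedding trust signals in an article page.
# Property names follow schema.org's NewsArticle vocabulary; the values
# and URLs are hypothetical, not the Trust Project's actual spec.
article_markup = {
    "@context": "https://schema.org",
    "@type": "NewsArticle",
    "headline": "City Council Approves New Budget",
    "datePublished": "2017-11-16",
    "author": {
        "@type": "Person",
        "name": "Jane Reporter",
        # Link to a staff bio so readers can check the journalist's expertise.
        "url": "https://example-news.org/staff/jane-reporter",
    },
    "publisher": {
        "@type": "NewsMediaOrganization",
        "name": "Example News",
        # Pointers to the outlet's ethics and corrections policies, two of
        # the transparency signals the Trust Indicators are meant to surface.
        "publishingPrinciples": "https://example-news.org/ethics-policy",
        "correctionsPolicy": "https://example-news.org/corrections",
    },
}

# Serialized as JSON-LD, this would sit in a <script type="application/ld+json">
# tag in the article's HTML, where platform crawlers can read it.
print(json.dumps(article_markup, indent=2))
```

Embedding the signals in the page itself, rather than publishing them separately, is what lets algorithms and news consumers draw on the same standardized information.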
Google’s Gingras has stated that “partnering with the Trust Project since its conception has been of significant importance to Google, in large part because … the indicators can help our algorithms better understand authoritative journalism — and help us to better surface it to consumers.” Facebook has already begun displaying the Trust Indicators to a small group of publishers, with plans to expand more broadly over the coming months.
Another initiative involving many of the same tech companies is the News Integrity Initiative at the CUNY Graduate School of Journalism. The school describes it as “a $14 million fund supporting efforts to connect journalists, technologists, academic institutions, nonprofits and other organizations from around the world to foster informed and engaged communities, combat media manipulation and support inclusive, constructive and respectful civic discourse.” Initiatives like the Trust Project and the News Integrity Initiative are important because solutions are not going to happen in a silo; it will take many players working together.
My own experience as a supporter and member of these initiatives suggests that getting to a place where news consumers can easily assess the credibility of the news will take considerable time and care. First, it’s a bad idea to publish information that helps bad actors game a system: disclosing keyword or statistical filtering methods, for example, tells bad actors a lot about how to work around such protections. Second, there are unresolved liability and legal issues. Different countries have varying laws about defamation, free speech and privacy; what’s merely offensive in one country might be illegal in another. Worse, bad actors engaged in media manipulation are already attacking those trying to get honest reporting out there, such as Snopes and PolitiFact.
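As a toy illustration of that first point, consider a naive keyword filter: once its blocklist is public, it can be evaded with trivial character substitutions. The blocklist and the evasion trick below are hypothetical examples of my own, not any platform’s actual defenses.

```python
# Toy illustration of why publishing filtering methods helps bad actors.
# The blocklist and evasion below are hypothetical, not a real platform's.

BLOCKLIST = {"miracle cure", "shocking truth"}  # hypothetical disclosed keywords

def naive_filter(text: str) -> bool:
    """Flag a post if it contains any blocklisted phrase."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in BLOCKLIST)

post = "Doctors hate this miracle cure!"
print(naive_filter(post))  # True: the filter catches the plain phrasing

# Once the blocklist is known, a bad actor can dodge it by swapping in
# look-alike characters (here, a Cyrillic 'е' for the Latin 'e').
evasive_post = post.replace("e", "\u0435")
print(naive_filter(evasive_post))  # False: the same message slips through
```

The same message reaches readers either way; only the defense’s effectiveness changes, which is why transparency about methods has to be weighed carefully.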
As with any complex undertaking, there are a lot of devils in the details here. But there is also the potential to give both news organizations and consumers the means for achieving increasingly smart media literacy, pushing back against those who wish to do us harm.