An asymmetrical war against the press

The term “fake news” is not only self-defeating; it oversimplifies a very complex problem.

A year ago, this wasn’t the case. The term actually meant something. It described a particular type of website that used the same design templates as professional news websites but whose content was entirely fabricated.

But earlier this year, the term started to become meaningless. It came to describe any piece of information that someone else didn’t like. Increasingly, the term has been weaponized by politicians who use it to undermine independent journalism in an effort to reach the public directly through their own channels.

This is not just a US phenomenon. Research by the Columbia Journalism Review shows that people in three other countries increasingly believe that the “mainstream” media peddle fabricated stories.

In countries where a free press is a luxury and free speech is not guaranteed, this phrase is being used as an excuse to clamp down on both. Terminology matters, and using it simply “because everyone else uses it” is no longer good enough.

The term is also woefully inadequate for describing the complexity of the situation. When we think about the Facebook posts created by Russian accounts during the US election, would we consider them news? What about the image of a shark swimming up a Texas highway during Hurricane Harvey? That shark was real, but it was not in Houston during that hurricane.

We live in a time when our information streams are polluted and there are many different types of information. They move and shift. Some types are visible; others are harder to spot. Some we would all agree are problematic — manipulated images created during a breaking news event, for example, designed to confuse and to hoax. But what about satirical news websites? What about misleading headlines designed solely to drive traffic?

We need to rethink our vocabulary.

In our recent report, “Information Disorder,” commissioned by the Council of Europe and published by Harvard’s Shorenstein Center, we proposed three different terms as a way to think and talk about this issue.

Misinformation is when there is an unintentional mistake, such as the poor use of statistics or quotes, or when an old image resurfaces (for example when people shared the shark photo mentioned above).

Disinformation is when false or manipulated information or imagery is deliberately used to do harm to someone. (The Facebook ads created by Russia and targeted at US voters during the presidential election are an example of this.)

Malinformation is when genuine information is used to cause harm to someone (for example revenge porn).

Rumours, conspiracy theories and fabricated information are nothing new. As Sun Tzu explained 25 centuries ago, “all warfare is based on deception.” False information is a part of our lives, whether it is individuals lying to save face or prevent hurt feelings, politicians making unrealistic promises during election campaigns, corporations using false claims to damage their competitors or the media disseminating misleading stories to gain a wider audience.

However, social media has added an entirely new dimension to the phenomenon, most significantly because the power dynamics have changed. Now anyone in the world can easily create and disseminate false information, and with the help of bots, organized groups, or targeted ads, it is easy to amplify it.

Given that social media has turned our information consumption into a public performance for an audience, most people are now even less likely to swim against the tide and challenge one another. In our increasingly lonely lives, who would want to feel lonelier?

Recent revelations about foreign meddling in the US elections have exposed the fact that we’re targets in an active information war. Previously these types of campaigns were fought via sophisticated and expensive communication technologies, such as short-wave radio or transnational satellite television.

Now significantly less powerful agents can harm large institutions or established individuals with few resources. It is asymmetrical warfare.

Two unique aspects of social media have changed the game. Firstly, disinformation can be cheaply amplified through committed volunteers, paid agents or bots. Secondly, our information sources are becoming increasingly social, and therefore much more visual, emotional, and performative. And as trust in institutions decreases, people are turning to their closest networks of family and friends for information.

This has created a perfect environment for the spread of disinformation around the world.

There are significant and long-term concerns. Social media has become the most powerful vehicle for politicians around the world to undermine traditional media, thereby damaging the very foundation of the idea of representative democracy: informed voters.

The combination of social media and television is a hotbed for sensational politicians who benefit from populations polarized by political, economic, religious, racial or ethnic divisions.

Sophisticated methods — especially micro-targeted advertising via social media — have become a recipe for electoral success around the world.

So what can be done? The truth is that there are no easy solutions. These trends are symptoms of significant social and economic shifts globally. But ultimately, only logic and critical thinking can save us from the trap of manipulation.

Research shows the more educated people are, the less likely they are to be affected by information warfare. Technologists, policy-makers, and researchers are working hard to find short-term solutions. However, none of these will have a long-term effect if we don’t overhaul public education, especially regarding information literacy.
