There are limits to how much Facebook can do for us

We’re asking a lot of social media companies these days. Sure, we want updates from our friends. And we want the latest news. But we also want to avoid the least savory bits of content swirling online, including grisly violence and foreign propaganda.

Lately, the call has been particularly strong for social media companies to help with the last element — the elimination of unwanted content peddled by unsavory actors. Weary of news feeds and profile pages punctuated by graphic violence and fake news, many Americans — especially those frustrated by the role that fake news may have played in President Trump’s election — have, in a sense, told social media companies: “Fix this.”

The companies are clearly hearing that call. On Thursday, Facebook published its first public report on the subject, "Information Operations and Facebook," acknowledging that the types of information operations long perpetrated by governments against one another, including the spreading of fake news and the manipulation of political discussion, are now occurring on its own platform.

But not all concerns about content on social media are created equal. For some, such as the posting of graphic violence, “fix this” may simply demand greater attention and investment of resources by tech companies. For others, such as the spreading of fake news, we — as a society — need to figure out what, exactly, we mean by “fix this” before we can expect the companies to rise fully to the challenge.

Two recent news stories, which sat side by side on the front page of The New York Times and drew simultaneous attention across the media, illustrated this distinction. One covered a vicious murder in Cleveland, uploaded and broadcast on Facebook for all the world to see and viewed thousands of times before it was removed. The second covered France's presidential election and the Russian government-funded fake news stories circulating there, which aimed to revive one candidate's sagging fortunes by claiming, falsely, that he had surged in the polls.

Social media platforms are democratizing forces: they break down barriers to building communities and enable the sharing of everything from family photos to political views to, yes, videos of point-blank murders and wildly misleading campaign coverage.

Not long ago, a murderer who sought to disseminate footage of his awful crime would have faced television news programs unwilling to show such horrific material. And no reputable newspaper would have published a story claiming a candidate’s surge that was clearly belied by the polls. Today, there are no gatekeepers, no barriers to access: there is, instead, a direct feed to a global audience.

Sharing graphic violence and foreign-funded electoral propaganda is not what the creators of social media platforms intended. They sought to forge communities that transcend geography, and they have succeeded — to the benefit of all of us who have instantaneous access to a wealth of experiences, ideas and views to which we once would never have been exposed.

Social media platforms give voice to dissidents and enable the unprecedented sharing of stories that demand a reckoning from the powerful, from the victims of unjustified police violence to an airline passenger bloodied for simply clinging to the seat he paid for. But, as with all widely available tools, bad actors have exploited these platforms, seeking not to share ideas with communities but to shock them with violence; not to engage in political dialogue but to manipulate political outcomes.

But while the two stories are alike in one way, each showing a valued and ubiquitous platform exploited for nefarious and unwanted purposes, they represent quite different challenges, both for the social media companies that seek to patrol their platforms and preserve their intended vision, and for all of us who use those platforms.

There is no disagreement about what should happen to a violent video uploaded by a murderer: it should be taken down as quickly as possible, to prevent its viewing and reposting. Such a video violates every reputable social media platform's terms of service, and no voice in our society calls for its protection.

When Americans, weary of seeing such violence online, ask tech companies to “fix this,” the challenge for each platform is, in a sense, simple: to implement this consensus as effectively as possible. This is in part a resourcing challenge, with greater investment allowing more rapid review and reaction once a video is flagged by a user as requiring priority consideration for removal.

It is in part a technical challenge, with the potential for automated review, informed by machine learning, to accelerate a company's responsiveness. All in all, the basic approach is to restore a gatekeeper, at least for a video so horrific as to merit gatekeeping.

But the challenge posed by Russian-funded fake news is more complicated, because it is still unclear exactly what we mean when we ask tech companies to “fix this.” Do we want a social media company to be the gatekeeper determining which news is fake — or, tougher still, which news qualifies as misleading — and essentially censoring it from millions or billions of users? That seems a heavy burden to bear.

Perhaps the right approach is for certain stories to be flagged as meriting heightened skepticism from readers, based on some combination of user input, company review and algorithmic analysis. This appears to be the direction in which companies such as Facebook and Google are moving as they grapple with continuing concerns, and it is at the heart of some mitigation measures that Facebook touted in its report.

Or maybe the best way to address this challenge is to encourage partnerships of the type Facebook has forged with Snopes.com, pairing social media companies with fact-checking websites so that the latter can call attention to particularly egregious distortions of the truth.

Alternatively, there may be a role for financial transparency — that is, if a reader can see who paid for a story to be written and disseminated, that insight can inform the reader’s assessment of its validity.

Social media platforms have become a microcosm of our world: they are a place where violence is propagated and elections are manipulated, just as they are a place where ideas are shared and artistic collaborations are nurtured. And, just as in the physical world, some problems simply require a response as swift and decisive as our resources and best technologies allow, whereas others demand that we first grapple with who bears which roles and responsibilities in managing the all-important transmission of ideas throughout our society and polity.

In other words, some of these challenges, such as the rapid removal of graphic violence, may indeed rest with the tech companies themselves. But others are a shared challenge for our broader society — for us to debate and determine what we really mean when we say: “Fix this.” That is, in essence, how the report from Facebook ends: with a plea for governments, journalists and civil society to determine their own appropriate roles in tackling the challenge posed by fake news.

We’re right to ask a lot of the tech companies whose platforms are now so central to daily life. But we need to ask a lot of ourselves, too.
