“If you want to liberate a society, just give them the Internet,” Egyptian revolutionary and Internet executive Wael Ghonim told CNN’s Wolf Blitzer five years ago.
Ghonim spoke from experience. Back then, he ran a Facebook page that served as an information hub for protesters during the run-up to the Egyptian revolution that brought down the regime of dictator Hosni Mubarak. Afterward, social media companies were lauded throughout the democratic world for empowering movements for justice, freedom and democracy.
Fast forward five years to today: Many Egyptian revolutionaries are in jail, and in many ways, our romance with social media and revolution has soured. The Internet remains a powerful tool for people fighting for social justice and human rights around the world, but we’ve witnessed the extent to which it also can be powerful in the hands of dictators and terrorists.
With headlines swirling about the Islamic State’s use of social media to recruit people from across the globe — sometimes mobilizing them to kill on ISIS’ behalf — we’re left with a challenge: How do we in the democratic world prevent terrorists from capitalizing on the Internet without compromising our own freedom?
How do we prevent our fears from leading us to destroy the very features activists and journalists around the world have come to depend upon?
Earlier this month, a delegation from the White House flew to Silicon Valley to talk with some of the world’s most powerful tech executives about “how to make it harder” for terrorists to use their products and platforms. Unfortunately, civil liberties groups and human rights experts were not invited.
To their credit, some companies such as Apple, Microsoft, Google and Facebook have joined forces with civil liberties groups in an attempt to persuade the Obama administration not to push anti-encryption measures that would enable government and law enforcement officials to access our secure communications.
If such “back doors” are introduced, it’s inevitable that criminals and repressive regimes will also exploit them, gaining access to people’s private communications, identifying journalists’ sources and learning of activists’ plans.
Yet pressure is mounting from politicians across the democratic world for global platforms, such as Twitter, Facebook and YouTube, to increasingly monitor and censor users, and flag more content for government agencies. These companies publish “transparency reports” about the number of official government requests they receive, and this data shows that calls to restrict online speech have skyrocketed around the world over the past five years.
Transparency doesn’t stop government agencies from making these demands, but it at least informs the public about who exactly is making them, what kind of speech different governments are trying to restrict and the extent to which companies are complying. After all, the most insidious type of censorship occurs when people don’t even know it is happening or who is responsible for it. And that’s exactly what’s starting to happen.
More and more, governments are asking companies to censor content or disable users’ accounts through informal and extralegal processes, where there is no transparency or accountability.
As it stands, any individual or organization can report content that seems to be in violation of a company’s terms of service and community guidelines. Facebook employs hundreds of people around the world, proficient in a range of languages, to review these “flags” and decide whether content should be taken down or if a user’s account should be disabled.
YouTube has what it refers to as trusted “superflaggers,” which include authorities such as the Counter Terrorism Internet Referral Unit of the UK’s Metropolitan Police. The unit’s reports are prioritized, and it can flag large numbers of videos at a time.
Executives at more than one major Internet company have privately confirmed to me, on condition that they or their companies not be named publicly, that officials in a range of countries, including India and Turkey, have learned to use such private flagging mechanisms.
That way, they can get content removed by company staff without having to issue formal requests that require approval by a ministry or a court. Formal requests are carefully reviewed by company lawyers, who may reject them, and are counted in the companies’ transparency reports.
In the United States, where the First Amendment protects a lot of speech that is prohibited by companies’ terms of service, private enforcement is the main way of dealing with speech that authorities find problematic but which is constitutionally protected.
There is no accountability in this process and inadequate recourse when company employees make a mistake, deleting content posted by activists or journalists that their opponents and detractors have flagged as violations. Innocent people are often caught in the crossfire. Late last year, several women named Isis said they had been shut out of Facebook. Two of them got their accounts restored only after the news media reported on their cases.
Even Ghonim’s own Facebook page was once disabled for a terms of service violation, then restored thanks to his strong connections in Silicon Valley and friends in the international human rights community. Citizen journalists operating from Syria and elsewhere in the Middle East have reported problems with content being removed.
Given this, companies should include in their “transparency reports” information about the volume and nature of content removed and accounts shut down as part of their terms of service enforcement and flagging processes.
Right now, no major U.S.-based Internet company reports this information. Governments must also be transparent and be held publicly accountable for what they are asking companies to do — be it officially or unofficially.
That is not to say companies shouldn’t cooperate with governments to fight crime and terror. But there is a reason why open and free societies maintain the rule of law, due process and strong legal and constitutional protections for freedom of speech. If these vital elements are replaced by private processes with no transparency, no accountability and no effective way for affected parties to appeal deletions or deactivations and obtain a remedy, power will inevitably be abused with no consequences for the abusers.
The victims will include many law-abiding peaceful people who have every right to express themselves but whose activities happen to be unpopular, misunderstood or offensive to powerful institutions.
This will be excellent news for regimes in Egypt, Turkey and elsewhere that already use broadly worded anti-terror laws to jail journalists and activists. Social media’s power as a tool for journalists hoping to expose injustice and for activists trying to build movements will corrode.
The Arab Spring may have failed in most countries. But if the rights of social media users are not protected and respected, the next movement could be deleted before the world ever learns about it.