There’s a new in-demand job at Facebook: counterterrorism specialist.
Facebook says it now has more than 150 people who are mainly focused on fighting terrorism on the social network, including a mix of academics, analysts and former law enforcement agents.
This team of specialists has “significantly grown” over the last year, according to a Facebook blog post Thursday detailing its efforts to crack down on terrorists and their posts.
“Really my job is how do we disrupt what the terrorists are trying to do and how do we get ahead of it,” Brian Fishman, Facebook’s counterterrorism policy manager, told CNN Tech.
Fishman, a former professor who published a book on ISIS and al Qaeda, joined Facebook just over a year ago. His team works with other parts of Facebook to build tools to detect terrorist activity and prevent the spread of propaganda.
Facebook says it recently began experimenting with artificial intelligence to better judge whether the language in a post amounts to terrorist propaganda, a sign of how new technology could help tackle the problem.
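Facebook hasn't published technical details, but the approach it describes resembles standard text classification: score a post's language against examples of known propaganda. The sketch below is purely illustrative; the training examples, model choice and score are invented stand-ins, not Facebook's actual system.

```python
# Illustrative sketch only: Facebook has not disclosed its models. This shows
# the general idea of text classification with a bag-of-words baseline, using
# hypothetical labeled examples (1 = propaganda-like, 0 = benign).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data; a production system would train on large,
# human-reviewed corpora with far richer features.
texts = [
    "join our fight and pledge allegiance to the cause",
    "martyrdom operations will be rewarded",
    "family photos from our vacation last week",
    "recipe for the best chocolate chip cookies",
]
labels = [1, 1, 0, 0]

# TF-IDF turns each post into a weighted word-count vector; logistic
# regression then scores how propaganda-like that vector looks.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(texts, labels)

# Score a new post: estimated probability it resembles the flagged class.
score = classifier.predict_proba(["pledge allegiance and join the fight"])[0][1]
print(f"propaganda-like score: {score:.2f}")
```

A real deployment would pair scores like this with human review, which squares with the company's own caveat later in this article that algorithms still trail people at reading context.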
Facebook also uses image matching tools to prevent users from uploading a known terrorist video or picture, as well as algorithms that spot clusters of related profiles, pages and groups supporting terrorism.
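Facebook's matching system isn't public, but a common technique for catching re-uploads is perceptual hashing: reduce an image to a short fingerprint that survives resizing and re-encoding, then compare new uploads against fingerprints of known terrorist imagery. The sketch below is a minimal illustration; the fingerprint database, threshold and test image are all invented for the example.

```python
# Illustrative sketch of perceptual hashing, not Facebook's implementation.
from PIL import Image

def dhash(img: Image.Image, hash_size: int = 8) -> int:
    """Difference hash: record whether each pixel outshines its right neighbor
    in a small grayscale thumbnail, yielding a 64-bit fingerprint by default."""
    small = img.convert("L").resize((hash_size + 1, hash_size))
    pixels = list(small.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (left > right)
    return bits

def hamming(a: int, b: int) -> int:
    """Bits that differ between two fingerprints; small distance = near-duplicate."""
    return bin(a ^ b).count("1")

# Hypothetical fingerprint database and upload (a flat gray test image here).
known_bad = {0x3C3C3C3C3C3C3C3C}
upload_hash = dhash(Image.new("L", (64, 64), color=128))
if any(hamming(upload_hash, bad) <= 5 for bad in known_bad):
    print("near-duplicate of known content: block the upload")
else:
    print("no match found")
```

In practice, Facebook and other major platforms announced in late 2016 that they would share hashes of known terrorist imagery through a joint industry database, so a match found on one service can be blocked on the others.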
“The [terrorists’] tactics change quickly,” Fishman says. “What we see is terrorist actors and their supporters start to understand the kind of things that we’re doing and they try to change what they do and we have to be reactive to that.”
“That,” he adds, “is the struggle.”
Facebook and its peers in the tech industry have been pressured by lawmakers in recent months to do more to combat terrorists on their platforms.
British Prime Minister Theresa May, in particular, has called for regulating “cyberspace to prevent the spread of extremist and terrorism planning.” After the terror attack in London earlier this month, a Facebook official said it’s working to make the social network “a hostile environment for terrorists.”
Multiple lawsuits have also been filed against Facebook, Twitter and Google, accusing them of providing platforms for ISIS.
Facebook’s post on Thursday provided greater detail about how it spots and polices terrorism content on a site used by nearly two billion people each month. The team felt a need to put out the information now in light of recent attacks and heightened scrutiny of tech companies.
“Certainly we’ve seen questions about what should social media companies be doing,” Monika Bickert, Facebook’s director of global policy management, told CNN Tech. “Part of this is telling our community, ‘This is our commitment and this is how we are living up to it.'”
In the post, Bickert and Fishman admit “AI can’t catch everything.”
“Figuring out what supports terrorism and what does not isn’t always straightforward, and algorithms are not yet as good as people when it comes to understanding this kind of context,” they write.
To that end, Facebook is building up its in-house counterterrorism expertise, partnering with researchers and governments, and leaning on its team of community moderators.
Facebook said last month it plans to hire 3,000 reviewers to combat violent videos in addition to the 4,500 people already on the community operations team. Some experts criticized it as a drop in the bucket considering how much content Facebook’s users share.
Facebook is now publicly owning up to how difficult this problem is to solve — and calling for help from its community.
Earlier on Thursday, Facebook asked its users to weigh in on a series of “hard questions” facing the company, including the best approach for “keeping terrorists from spreading propaganda online.”
“As we proceed, we certainly don’t expect everyone to agree with all the choices we make. We don’t always agree internally. We’re also learning over time, and sometimes we get it wrong,” Elliot Schrage, Facebook’s VP for public policy and communications, wrote in a blog post.
“But even when you’re skeptical of our choices, we hope these posts give a better sense of how we approach them.”