Facebook is expanding its automated efforts to prevent suicide.
The company said on Monday it is now using artificial intelligence to identify posts, videos, and Facebook Live streams containing suicidal thoughts. It will also use the technology to prioritize the order in which its team reviews posts.
In March, Facebook began a limited test of AI-based suicide prevention efforts on text-only posts in the U.S.
Its latest effort brings the automated flagging tools to text and video posts globally, except in the EU, where data privacy restrictions differ from those in other parts of the world.
In a blog post, the company detailed how AI looks for patterns in posts that may contain references to suicide or self-harm. In addition to searching for words and phrases in posts, it will scan the comments. According to Facebook, comments like “Are you ok?” and “Can I help?” can be an indicator of suicidal thoughts.
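As a rough illustration of how phrase-based flagging of posts and comments might work, here is a minimal sketch in Python. The phrase lists, weights, and threshold are invented for illustration and are not Facebook's actual detection system.

    # Minimal illustrative sketch of phrase-based flagging; not Facebook's
    # actual system. Phrase lists, weights, and threshold are invented.

    POST_PHRASES = {"want to die": 3, "end it all": 3, "can't go on": 2}
    COMMENT_PHRASES = {"are you ok": 1, "can i help": 1, "please don't": 2}

    def risk_score(post_text: str, comments: list[str]) -> int:
        """Sum the weights of matched phrases in the post and its comments."""
        text = post_text.lower()
        score = sum(w for phrase, w in POST_PHRASES.items() if phrase in text)
        for comment in comments:
            c = comment.lower()
            score += sum(w for phrase, w in COMMENT_PHRASES.items() if phrase in c)
        return score

    def should_flag(post_text: str, comments: list[str], threshold: int = 3) -> bool:
        """Flag the post for human review if its score meets the threshold."""
        return risk_score(post_text, comments) >= threshold

In practice, Facebook's system would rely on machine-learned models rather than a fixed phrase list, but the idea of combining signals from the post and its comments is the same.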
If the team reviews a post and determines an immediate intervention is necessary, Facebook may work with first responders to send help. The social network may also reach out to users via Facebook Messenger with resources, such as links to the Crisis Text Line, National Eating Disorder Association, and National Suicide Prevention Lifeline.
Facebook will also use AI to make sure its team reviews posts from those most in distress first.
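One plausible way to order such a review queue by estimated severity, assuming a numeric risk score like the one sketched above, is a simple priority queue. Again, this is an illustration, not Facebook's actual method.

    import heapq

    # Illustrative review queue: higher-risk posts are popped first. Assumes
    # each flagged post already has a numeric risk score; not Facebook's
    # actual prioritization system.

    class ReviewQueue:
        def __init__(self):
            self._heap = []      # min-heap of (-score, order, post_id)
            self._counter = 0    # tie-breaker preserving insertion order

        def add(self, post_id: str, score: int) -> None:
            heapq.heappush(self._heap, (-score, self._counter, post_id))
            self._counter += 1

        def next_for_review(self) -> str:
            """Return the ID of the highest-risk post waiting for review."""
            return heapq.heappop(self._heap)[2]

    # Example: the post with the higher score is reviewed first.
    queue = ReviewQueue()
    queue.add("post_a", 2)
    queue.add("post_b", 5)
    print(queue.next_for_review())  # post_b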
The move is part of a broader effort to support at-risk users. Facebook has faced criticism for its Facebook Live feature, which some users have used to live-stream graphic events, including suicide.
The test of the automated effort, which began earlier this year, appears to be a success so far, according to Guy Rosen, vice president of product management.
“Over the last month, we’ve worked with first responders on over 100 wellness checks based on reports we received via our proactive detection efforts,” said Rosen.
That number does not include reports from people in the Facebook community. Facebook users can still report potential self-harm to Facebook directly. Those posts go through the same human analysis as those flagged by AI tools.