“Revenge porn” is sexual abuse in a new digital form. A recent study shows that 10% of women under 30 years old in the United States have been victimized by the misuse of their intimate images. Facebook is one of many platforms that host this kind of abuse despite its efforts to tinker with the ways users can report unauthorized content.
This spring, Facebook rolled out a feature allowing users to request that Facebook take down any unauthorized intimate images that are being shared on the platform. And last week, Facebook announced a pilot program for users in Australia to upload nude images of themselves that they feared might be shared without their permission. Facebook would then generate a digital fingerprint (a “hash”) of each image so that copies of it could be blocked from ever being shared on the platform.
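To make the idea concrete, here is a minimal sketch of how hash-based blocking like this might work. It uses the open-source Pillow and imagehash Python libraries purely for illustration; Facebook has not published its actual system (which reportedly resembles Microsoft’s PhotoDNA), so the function names and matching threshold below are assumptions, not its real pipeline.

```python
# Illustrative sketch only: Facebook's real fingerprinting system is not
# public. This uses the open-source Pillow and imagehash libraries.
from PIL import Image
import imagehash

# Fingerprints of images that victims have flagged. In a real system this
# would be a database populated by the reporting pipeline; crucially, only
# the hashes need to be stored, not the images themselves.
blocked_hashes = set()

def register_blocked_image(path: str) -> None:
    """Fingerprint a flagged image and store its perceptual hash."""
    blocked_hashes.add(imagehash.phash(Image.open(path)))

def upload_is_blocked(path: str, max_distance: int = 5) -> bool:
    """Return True if an upload is a near-duplicate of any flagged image.

    Perceptual hashes of similar images differ in only a few bits, so a
    small Hamming distance (here, an assumed threshold of 5) also catches
    re-encoded, resized, or lightly edited copies.
    """
    candidate = imagehash.phash(Image.open(path))
    return any(candidate - blocked < max_distance for blocked in blocked_hashes)
```

The key design point is that matching happens on compact fingerprints, which is why, in principle, the original photo never needs to be retained once it has been hashed.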
A better solution would be to give users the power to prevent any images that depict them from ever being posted on the platform.
In the press release about the Australian program, Facebook explains: “We don’t want Facebook to be a place where people fear their intimate images will be shared without their consent.” The company said the program would involve “specially trained” employees who would review the photos to ensure that claims were legitimate.
Understandably, some people were skeptical about Facebook’s ability to adequately protect the privacy of these photos. One headline, for example, called it “Facebook’s latest ‘horrible idea’”; another joked, “Send n00dz”; and Stephen Colbert ridiculed the idea as “fighting fire with fire.”
Although some people might find a bit of relief from this program — and I applaud it for that — there are many others who could be traumatized by having to send their photos to Facebook employees, even if they are “specially trained.” Training doesn’t always equate to professionalism, as we saw with reports of TSA agents “laughing” at passengers’ body scans.
Further, many victims don’t know that their images are being shared until after it happens — if they ever find out. Some victims are depicted in images they didn’t even know were being produced — abusers get images from hidden cameras, hacked webcams, and stolen passwords.
Surprisingly, performers in the legal US pornography industry have more control over their nude images than Facebook users do. Facebook’s general policy is to post photos first and deal with illegal content or takedown requests later. In contrast, porn performers must provide written consent, by signing a “model release,” before their images are ever published and distributed. Under US law, most notably Section 230 of the Communications Decency Act, internet companies like Facebook are not treated as publishers, so they are usually not legally liable for violations of individuals’ privacy, personality, or publicity rights; the “sponsored stories” class-action lawsuit against Facebook was a notable exception.
Online privacy doesn’t have to be such a free-for-all. Imagine this: I could get a request for permission every time you try to post a photo of me. You’d have to tag everyone in your photo, and facial recognition could help catch anyone you missed. Right now, I can get a notification if you tag me, but only after the photo has already been posted. Maybe you could post a photo with my face blurred out until you have my permission to show the full image. Maybe a setting could let me always (or never) trust you to post photos of me without asking each time.
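The logic of that permission flow is simple enough to sketch. Everything below (the ConsentChoice settings, the PhotoPost class, and so on) is a hypothetical illustration of the proposal, not a real Facebook feature or API:

```python
# Hypothetical sketch of the consent flow proposed above; none of these
# names correspond to any real Facebook API.
from dataclasses import dataclass, field
from enum import Enum

class ConsentChoice(Enum):
    ASK_EACH_TIME = "ask"     # default: request permission for every photo
    ALWAYS_ALLOW = "always"   # the "always trust you" setting
    NEVER_ALLOW = "never"     # the "never trust you" setting

@dataclass
class User:
    name: str
    # Per-poster trust settings, keyed by the poster's name.
    trust: dict[str, ConsentChoice] = field(default_factory=dict)

@dataclass
class PhotoPost:
    poster: User
    depicted: list[User]                    # from tags plus face recognition
    approvals: set[str] = field(default_factory=set)

    def visible_faces(self) -> dict[str, bool]:
        """Decide, per depicted person, whether their face may be shown.

        Anyone who has not consented (and has not pre-approved the poster)
        stays blurred until they grant permission.
        """
        visibility = {}
        for person in self.depicted:
            choice = person.trust.get(self.poster.name, ConsentChoice.ASK_EACH_TIME)
            if choice is ConsentChoice.ALWAYS_ALLOW:
                visibility[person.name] = True
            elif choice is ConsentChoice.NEVER_ALLOW:
                visibility[person.name] = False
            else:
                visibility[person.name] = person.name in self.approvals
        return visibility

    def grant_permission(self, person: User) -> None:
        """Record that a depicted person approved this specific post."""
        self.approvals.add(person.name)

# Example: Alice posts a photo of Bob and Carol. Carol pre-trusts Alice,
# so her face shows immediately; Bob stays blurred until he approves.
alice = User("alice")
bob = User("bob")
carol = User("carol", trust={"alice": ConsentChoice.ALWAYS_ALLOW})
post = PhotoPost(poster=alice, depicted=[bob, carol])
print(post.visible_faces())  # {'bob': False, 'carol': True}
post.grant_permission(bob)
print(post.visible_faces())  # {'bob': True, 'carol': True}
```

Notice that the default is to ask: nothing about a depicted person is shown until they say yes, which is the opposite of Facebook’s current post-first, report-later model.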
Unfortunately, it’s unlikely that Facebook will do this because more photos generate more user engagement and thus more profit. Even if some of those “engagements” are sexual violations, they still generate revenue. Preventing misuses of a permission-based system would be costly. They’d need to invest in a creative combination of human and machine decision-makers to reject fraudulent tags on photos and to protect newsworthy images.
The “free speech” argument against this system is a red herring. Facebook is a private company that can, should, and does restrict the speech that circulates on its platform. Some speech it is compelled to restrict by local laws, and it also enforces its own “community standards” to restrict users’ speech. And why should social media always allow your free speech rights to trump my privacy rights?
Inevitably, getting permission to post everyone’s photos would slow things down and probably result in fewer personal images being shared on the platform. People might have to wait minutes, hours, or longer for their complete, unblurred group photo to post publicly.
That trade-off is worth it because it could prevent people from being victimized by nonconsensual intimate image distribution, and it’s worth it for Facebook if the company wants to earn back users’ trust and genuinely protect their autonomy and safety on the platform.
Requiring consent to post photos of other people might have prevented incidents like “Marines United,” in which US Marines engaged in the nonconsensual sharing of thousands of nude images of their female coworkers and other women in a secret Facebook group. With a permission-based system, Facebook’s facial recognition could probably have identified at least some of the people depicted and alerted them, preventing their photos from ever being shared and likely exposing the group much sooner.
Social media allows people to commit sexual abuse with the click of a button. So, rather than reacting after the damage is done, Facebook should invest in preventive measures by letting users decide whether images that depict them are ever posted in the first place.
Correction: An earlier version of this article misstated that the images in the “Marines United” case were nonconsensual. It has been changed to reflect that it was the sharing of the images that was nonconsensual.