Meta’s Oversight Board is set to evaluate the company’s handling of deepfake pornography amid growing concerns that artificial intelligence is fueling a rise in the creation of fake, explicit imagery as a form of harassment.
The Oversight Board said Tuesday that it will review how Meta addressed two explicit, AI-generated images of female public figures, one from the United States and one from India, to assess whether the company has appropriate policies and practices in place to address such content — and whether it is enforcing those policies consistently around the world.
The threat of AI-generated pornography has gained attention in recent months, with celebrities including Taylor Swift, as well as US high school students and other women around the world, falling victim to the form of online abuse. Widely accessible generative AI tools have made it faster, easier and cheaper to create such images. Meanwhile, social media platforms make it possible to spread these images rapidly.
“Deepfake pornography is a growing cause of gender-based harassment online and is increasingly used to target, silence and intimidate women – both on and offline,” Meta Oversight Board Co-Chair Helle Thorning-Schmidt said in a statement.
“We know that Meta is quicker and more effective at moderating content in some markets and languages than others,” said Thorning-Schmidt, who is also the former prime minister of Denmark. “By taking one case from the US and one from India, we want to look at whether Meta is protecting all women globally in a fair way.”
Meta’s Oversight Board is an entity made up of experts in areas such as freedom of expression and human rights. It is often described as a kind of Supreme Court for Meta, as it allows users to appeal content decisions on the company’s platforms. The board makes recommendations to the company about how to handle certain content moderation decisions, as well as broader policy suggestions.
As part of its review, the board will evaluate one instance of an AI-generated nude image resembling a public figure from India that was shared to Instagram by an account that “only shares AI-generated images of Indian women.”
A user reported the image as pornographic, but the report was automatically closed after it went unreviewed by Instagram for 48 hours. The same user appealed Instagram’s decision to leave the image up, but that report was also automatically closed without review. After the Oversight Board told Meta of its intention to take up the case, the company determined it had allowed the image to remain in error and removed it for violating its bullying and harassment rules, according to the board.
The second case involves an AI-generated image of a nude woman being groped, which was posted to a Facebook group for AI creations. The image was meant to resemble an American public figure, who was also mentioned in the image’s caption.
The same image had been posted previously by a different user, at which point it was escalated to policy experts who decided to remove it for violating bullying and harassment rules, “specifically for ‘derogatory sexualized photoshop or drawings.’” The image was then added to a photo-matching bank that automatically detects when rule-breaking images are reposted, so the second user’s post was automatically removed.
As part of this latest review, the Oversight Board is seeking public comments — which can be submitted anonymously — about deepfake pornography, including how such content can harm women and how Meta has responded to posts featuring AI-generated explicit imagery. The public comment period closes on April 30.