Meta’s Oversight Board Investigates AI-Generated Explicit Images of Public Figures
The Oversight Board, Meta’s semi-independent policy council, is turning its attention to how the company’s social platforms are handling explicit, AI-generated images. The board announced investigations into two separate cases over how Instagram in India and Facebook in the U.S. handled AI-generated images of public figures after Meta’s systems fell short on detecting and responding to the explicit content.
The Cases Under Investigation
In the first case, a user reported an AI-generated nude image of an Indian public figure on Instagram as pornography. The image was posted by an account that exclusively shares AI-created images of Indian women, and the majority of users who engage with these images are based in India. Meta failed to take down the image after two reports, and the explicit AI-generated image remained on Instagram until the user appealed to the Oversight Board.
The second case relates to Facebook, where a user posted an explicit, AI-generated image resembling a U.S. public figure in a Group focused on AI creations. Here, the social network took the image down because it had been posted by another user earlier, and Meta had added it to a Media Matching Service Bank under the "derogatory sexualized photoshop or drawings" category.
Oversight Board Co-Chair Helle Thorning-Schmidt stated:
“We know that Meta is quicker and more effective at moderating content in some markets and languages than others. By taking one case from the US and one from India, we want to look at whether Meta is protecting all women globally in a fair way.”
The Growing Concern of Deepfake Porn and Online Gender-Based Violence
Some generative AI tools have expanded to allow users to generate porn, raising ethical concerns and issues of bias in training data. In regions like India, deepfakes have become a significant concern, with women disproportionately targeted by deepfaked videos. Deputy IT Minister Rajeev Chandrasekhar has expressed dissatisfaction with tech companies' approach to countering deepfakes and has warned of potential consequences for platforms that fail to address the issue.
Aparajita Bharti, co-founder at The Quantum Hub, an India-based public policy consulting firm, emphasized the need for limits on AI models to prevent the creation of explicit content that causes harm:
“Generative AI’s main risk is that the volume of such content would increase because it is easy to generate such content and with a high degree of sophistication. Therefore, we need to first prevent the creation of such content by training AI models to limit output in case the intention to harm someone is already clear. We should also introduce default labeling for easy detection as well.”
Meta’s Response and Next Steps
Meta stated that it took down both pieces of content but did not address its failure to remove the Instagram content after the initial user reports, or how long the content remained on the platform. The company says it uses a mix of artificial intelligence and human review to detect sexually suggestive content, and that it does not recommend such content in places like Instagram Explore or Reels.
The Oversight Board has sought public comments on the matter, covering the harms caused by deepfake porn, contextual information about the proliferation of such content in regions like the U.S. and India, and possible pitfalls in Meta's approach to detecting AI-generated explicit imagery. The board will review the cases and public comments and publish its decision in a few weeks.
These cases highlight the challenges large platforms face in adapting their moderation processes to keep up with the rapid advancements in AI-powered content creation and distribution tools. While companies like Meta are experimenting with AI for content generation and detection, perpetrators continue to find ways to escape these detection systems and post problematic content on social platforms.