Meta’s Oversight Board Investigates AI-Generated Explicit Images of Public Figures
The Oversight Board, Meta’s semi-independent policy council, is turning its attention to how the company’s social platforms are handling explicit, AI-generated images. The board announced investigations into two separate cases over how Instagram in India and Facebook in the U.S. handled AI-generated images of public figures after Meta’s systems fell short on detecting and responding to the explicit content.
The Cases Under Investigation
In the first case, a user reported an AI-generated nude of a public figure from India on Instagram as pornography. The image was posted by an account that exclusively posts AI-created images of Indian women, and the majority of users who react to these images are based in India. Meta failed to take down the image after two reports, and the explicit AI-generated image remained on Instagram until the user appealed to the Oversight Board.
The second case relates to Facebook, where a user posted an explicit, AI-generated image resembling a U.S. public figure in a Group focused on AI creations. In this case, the social network took the image down because an identical image had been posted by another user earlier, and Meta had added it to a Media Matching Service Bank under the “derogatory sexualized photoshop or drawings” category.
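Meta has not published implementation details for its Media Matching Service Banks, but the underlying idea (fingerprinting media that reviewers have already judged violating, so that re-uploads can be matched and removed automatically) can be illustrated. The sketch below is a minimal, hypothetical Python illustration: the `MediaBank` class and its methods are invented for this example, and it uses an exact SHA-256 digest as the fingerprint, whereas production systems typically rely on perceptual hashes that tolerate re-encoding, resizing, and cropping.

```python
import hashlib


class MediaBank:
    """Hypothetical sketch of a media-matching bank (not Meta's API).

    Fingerprints of previously removed images are stored under a policy
    label; new uploads are checked against the bank before publication.
    """

    def __init__(self):
        self._bank = {}  # fingerprint -> policy label

    @staticmethod
    def fingerprint(image_bytes: bytes) -> str:
        # Exact-match digest for illustration only; real systems use
        # perceptual hashing so near-duplicates also match.
        return hashlib.sha256(image_bytes).hexdigest()

    def add(self, image_bytes: bytes, label: str) -> None:
        """Bank an image that reviewers have already judged violating."""
        self._bank[self.fingerprint(image_bytes)] = label

    def match(self, image_bytes: bytes) -> str | None:
        """Return the policy label if the upload matches a banked image."""
        return self._bank.get(self.fingerprint(image_bytes))


bank = MediaBank()
bank.add(b"<bytes of previously removed image>",
         "derogatory sexualized photoshop or drawings")

upload = b"<bytes of previously removed image>"  # a re-upload of banked content
label = bank.match(upload)
if label:
    print(f"Remove automatically: matches banked content ({label})")
```

In this model, the first case (Instagram) failed at the reporting/review stage, so nothing was ever banked; the second case (Facebook) succeeded only because the image had already been banked from an earlier post.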
Oversight Board Co-Chair Helle Thorning-Schmidt stated:
“We know that Meta is quicker and more effective at moderating content in some markets and languages than others. By taking one case from the US and one from India, we want to look at whether Meta is protecting all women globally in a fair way.”
The Growing Concern of Deepfake Porn and Online Gender-Based Violence
Some generative AI tools have expanded to allow users to generate porn, raising ethical concerns and issues of bias in data. In regions like India, deepfakes have become a significant concern, with women more commonly subjected to deepfaked videos. Deputy IT Minister Rajeev Chandrasekhar has expressed dissatisfaction with tech companies’ approach to countering deepfakes and has warned of potential consequences for platforms that fail to address the issue.
Aparajita Bharti, co-founder of The Quantum Hub, an India-based public policy consulting firm, emphasized the need for limits on AI models to prevent the creation of explicit content that causes harm:
“Generative AI’s main risk is that the volume of such content would increase because it is easy to generate such content and with a high degree of sophistication. Therefore, we need to first prevent the creation of such content by training AI models to limit output in case the intention to harm someone is already clear. We should also introduce default labeling for easy detection as well.”
Meta’s Response and Next Steps
Meta stated that it took down both pieces of content but did not address its failure to remove the content on Instagram after initial user reports or the duration for which the content remained on the platform. The company uses a mix of artificial intelligence and human review to detect sexually suggestive content and does not recommend this kind of content in places like Instagram Explore or Reels recommendations.
The Oversight Board has sought public comments on the matter, addressing the harms caused by deepfake porn, contextual information about the proliferation of such content in regions like the U.S. and India, and possible pitfalls of Meta’s approach to detecting AI-generated explicit imagery. The board will investigate the cases and public comments and post its decision in a few weeks.
These cases highlight the challenges large platforms face in adapting their moderation processes to keep up with rapid advances in AI-powered content creation and distribution tools. While companies like Meta are experimenting with AI for content generation and detection, perpetrators continue to find ways to evade these detection systems and post problematic content on social platforms.