YouTube Introduces New Policy for AI-Generated Content Disclosure
To address growing concern that AI-generated content may be mistaken for real footage, YouTube has announced a new policy requiring creators to disclose when their videos contain realistic content created with artificial intelligence. The platform is rolling out a tool in Creator Studio that prompts creators to indicate whether their content could be confused with genuine footage of people, places, or events, despite being synthetically generated or altered using AI.
Preventing Viewer Deception in the Era of Generative AI
As generative AI tools become more advanced, it is increasingly difficult for viewers to distinguish authentic content from artificially created content. The new policy aims to ensure that users are not misled into believing synthetically generated videos are real, especially given the risks that AI and deepfakes pose during the upcoming U.S. presidential election, as experts have cautioned.
Scope of the New Disclosure Policy
The new disclosure requirement does not apply to content that is clearly unrealistic or animated, such as a video depicting a person riding a mythical creature through an imaginary world. Additionally, creators are not obligated to disclose the use of generative AI for production assistance, such as generating scripts or automatic captions.
The policy focuses on videos that utilize the likeness of a realistic person, such as digitally replacing one individual’s face with another’s or synthetically generating a person’s voice for narration. Creators must also disclose content that alters footage of real events or places, like depicting a real building on fire, or generating realistic scenes of fictional major events, such as a tornado approaching a real town.
Implementation and Enforcement
In the coming weeks, viewers will begin to see disclosure labels across all YouTube formats, starting with the mobile app and eventually expanding to desktop and TV. For most videos, the label will appear in the expanded description, while content touching on sensitive topics like health or news will feature a more prominent label on the video itself.
YouTube says it will consider enforcement measures for creators who consistently fail to apply the required labels. In some cases, particularly when content has the potential to confuse or mislead viewers, the company will add a label itself even if the creator has not done so.
Today’s announcement follows YouTube’s statement back in November that it would roll out this update as part of a broader introduction of new AI policies.
As the use of generative AI continues to grow, YouTube’s new disclosure policy represents a proactive step in ensuring transparency and maintaining viewer trust in an increasingly complex digital landscape.