YouTube cracks down on synthetic media with AI disclosure requirement
On Tuesday, YouTube announced it will soon require disclosure of realistic AI-generated content hosted on the service. “We’ll require creators to disclose when they’ve created altered or synthetic content that is realistic, including using AI tools,” the company wrote in a statement. The changes will roll out over the coming months and into next year.
The move is part of a broader effort by the platform to address challenges posed by generative AI in content creation, including deepfakes, voice cloning, and disinformation. When creators upload content, YouTube will provide new options to indicate whether it includes realistic AI-generated or AI-altered material. “For example, this could be an AI-generated video that realistically depicts an event that never happened, or content showing someone saying or doing something they didn’t actually do,” YouTube writes.
In the announcement, Jennifer Flannery O’Connor and Emily Moxley, vice presidents of product management at YouTube, explained the reasoning behind the policy update. “We believe it’s in everyone’s interest to maintain a healthy ecosystem of information on YouTube,” they write. “We have long-standing policies that prohibit technically manipulated content that misleads viewers … However, AI’s powerful new forms of storytelling can also be used to generate content that has the potential to mislead viewers—particularly if they’re unaware that the video has been altered or is synthetically created.”