The ease with which generative AI can spread misinformation online has prompted YouTube to introduce a new policy aimed at minimizing its impact. Starting next month, content creators on Google’s video-sharing platform will be required to disclose the presence of AI-generated content in their videos or risk demonetization.
Under the updated guidelines, YouTube creators must disclose whether their videos include realistic-looking content that has been altered or synthesized with artificial intelligence. The disclosure applies to videos containing AI-generated imagery, such as deepfakes, where viewers could otherwise be misled. The primary objective is to prevent creators from exploiting AI’s capabilities to misrepresent reality.
Failure to disclose AI-generated content may result in demonetization, video removal, account suspension for repeat offenses, and potential exclusion from the YouTube Partner Program, among other penalties.
Videos featuring generative AI content will display a prompt labeling them as “Altered or synthetic content. Sounds or visuals were altered or generated digitally.”
YouTube underscores that the label alone may not be sufficient, and videos violating community guidelines, irrespective of disclosure, may be removed.
Additionally, content created using YouTube’s own generative AI products and features will carry a specific label. Users can request the removal of inaccurate AI-generated content on YouTube, particularly content that simulates an identifiable individual’s face or voice.
Music partners will also have the option to request the removal of AI-generated music content.