YouTube To Require Disclosure When Videos Include Generative AI

Creators who repeatedly fail to disclose AI use will face penalties.

By: Bloomberg
| Updated on: Nov 15, 2023, 07:29 IST
YouTube video makers will now have to disclose when their videos are AI-generated. (AFP)

YouTube, the video platform owned by Alphabet Inc.'s Google, will soon require video makers to disclose when they've uploaded manipulated or synthetic content that looks realistic — including video that has been created using artificial intelligence tools.

The policy update, which will go into effect sometime in the new year, could apply to videos that use generative AI tools to realistically depict events that never happened, or show people saying or doing something they didn't actually do. “This is especially important in cases where the content discusses sensitive topics, such as elections, ongoing conflicts and public health crises, or public officials,” Jennifer Flannery O'Connor and Emily Moxley, YouTube vice presidents of product management, said in a company blog post Tuesday. Creators who repeatedly choose not to disclose when they've posted synthetic content may be subject to content removal, suspension from the program that allows them to earn ad revenue, or other penalties, the company said. 


When the content is digitally manipulated or generated, creators must select an option to display YouTube's new warning label in the video's description panel. For certain types of content about sensitive topics — such as elections, ongoing conflicts and public health crises — YouTube will display a label more prominently, on the video player itself. The company said it would work with creators before the policy rolls out to make sure they understood the new requirements, and is developing its own tools to detect when the rules are violated. YouTube is also committing to automatically labeling content that has been generated using its own AI tools for creators.

Google, which both makes tools that can create generative AI content and owns platforms that can distribute such content far and wide, is facing new pressure to roll out the technology responsibly. Earlier on Tuesday, Kent Walker, the company's president of legal affairs, published a company blog post laying out Google's “AI Opportunity Agenda,” a white paper with policy recommendations intended to help governments around the world think through developments in artificial intelligence.

“Responsibility and opportunity are two sides of the same coin,” Walker said in an interview. “It's important that even as we focus on the responsibility side of the narrative that we not lose the excitement or the optimism around what this technology will be able to do for people around the world.”

Like other user-generated media services, Google and YouTube have been under pressure to mitigate the spread of misinformation across their platforms, including lies about elections and global crises like the Covid-19 pandemic. Google has already started to grapple with concerns that generative AI could create a new wave of misinformation, announcing in September that it would require “prominent” disclosures for AI-generated election ads. Advertisers were told they must include language like, “This audio was computer generated,” or “This image does not depict real events” on altered election ads across Google's platforms. The company also said that YouTube's community guidelines, which prohibit digitally manipulated content that may pose a serious risk of public harm, already apply to all video content uploaded to the platform.

In addition to the new generative AI disclosures YouTube plans to add on the video platform, the company said it will eventually make it possible for people to request the removal of AI-generated or synthetic content that simulates an identifiable person, using its privacy request process. A similar option will be provided for music partners to request the removal of AI-generated music content that mimics an artist's singing or rapping voice, YouTube said. 

The company said not all content would be automatically removed once a request is placed; rather, it would “consider a variety of factors when evaluating these requests.” If the removal request references video that includes parody or satire, for instance, or if the person making the request can't be uniquely identified, YouTube could decide to leave the content up on its platform.


First Published Date: 15 Nov, 07:28 IST