YouTube Is Trying to Root Out Deepfakes With a New Program

YouTube may soon become a little less rife with fraud. Today, the video platform unveiled recognition technology designed to identify AI-generated content that imitates the faces and voices of YouTube creators. For now, the program is available only to creators who meet the requirements of the YouTube Partner Program.

Creators who join the program upload a photo and a voice recording, along with identity verification. They can then review videos the system flags and request their removal under YouTube’s privacy policy or through a copyright infringement complaint. There is also an option to archive videos to guard against quiet deletion.

In the short term, this is unlikely to stem the growing flood of videos in which influencers appear to promote products or ideas they have never heard of. But with new AI tools able to create convincing video fakes in minutes, this kind of protection may soon become available to everyone.


YouTube’s Larger AI Management Program

The recognition program is part of YouTube’s broader effort to manage the glut of AI-generated content on its site. Earlier this year, the company began requiring creators to label realistic-looking AI-generated videos, and it updated its monetization policy to reduce revenue for low-effort, inauthentic content, which is often produced with AI.


The potential dangers of large-scale identity verification

Of course, identity verification is not without risk. Verifying your identity with any company means uploading a driver’s license, passport, or other official ID, and often supplying biometric data as well, and tech companies have repeatedly failed to protect this kind of sensitive information from attackers.

YouTube’s new system may curb deepfakes and make the platform less spammy, but it also adds to the growing trove of personal data that people must trust tech companies to protect.
