YouTube is expanding its AI deepfake detection tool to all adult users

18 hours ago

Mia Sato

is a features writer with five years of experience covering the companies that shape technology and the people who use their tools.

YouTube is expanding its AI likeness detection program to all users over the age of 18 — meaning just about anyone can have the platform hunt for potential deepfakes of themselves.

The likeness detection feature uses a selfie-style scan of a person’s face to monitor YouTube for lookalikes. If there is a match, YouTube alerts the user; the person then has the option to request that YouTube remove the content. YouTube has said in the past that it has found the number of removal requests to be “very small.”

YouTube began testing the feature with content creators, then expanded it to government officials, politicians, journalists, and finally the entertainment industry. The expansion to any user 18 years or older is a significant shift — it essentially gives the average person the ability to constantly monitor YouTube for content that could use their likeness. Takedown requests are evaluated under YouTube’s privacy policy, and the company says it considers removals based on criteria such as whether the content is realistic, whether it is labeled as AI-generated, and whether a person can be uniquely identified. There are carveouts for things like parody and satire, and the tool covers only facial likeness, not other identifying features such as a person’s voice. Users can withdraw from the program and have YouTube delete their data.

The news was announced on YouTube’s creator forum, but spokesperson Jack Malon says there are no requirements on what constitutes a “creator” who is eligible.

“With this expansion, we’re making clear that whether creators have been uploading to YouTube for a decade or are just starting, they’ll have access to the same level of protection,” Malon said in an email.

Deepfake content often centers on celebrities, politicians, or other public figures, but the ability to create a convincing digital replica is a concern for private citizens, too. There have been instances of teenagers being deepfaked by classmates, and three teenagers sued xAI alleging that the company’s Grok chatbot generated child sexual abuse material (CSAM) of them.

