When AI tools first began proliferating around the web, worries about deepfakes quickly rose alongside them. And now that tech such as OpenAI's recently released Sora 2 is getting more capable and more widely available (and being used exactly as irresponsibly as you might have guessed), celebrities and ordinary people alike may want more control over how their likenesses are used. After teasing the feature last year, YouTube is starting to roll out a likeness detection tool to combat unwanted deepfakes and have them removed from the video platform.
Likeness detection is currently rolling out to members of the YouTube Partner Program. It also only covers instances where an individual's face has been modified with AI; cases where a person's voice has been changed by AI without their consent may not be caught by this feature. To participate, people will need to submit a government ID and a brief video selfie to YouTube, both to verify that they are who they say they are and to give the feature source material to draw from during its reviews. From there, it works similarly to YouTube's Content ID system for finding copyrighted audio: it scans uploaded videos for possible matches, which the person can then review and flag for removal.