YouTube introduces a tool to combat deepfakes involving people’s appearances
YouTube has begun testing a tool to help combat deepfakes that use people’s likenesses without their consent. The move responds to growing concern over artificial intelligence capable of generating realistic videos.
The new tool is part of the YouTube Partner Program. It works much like the Content ID system, scanning uploaded videos for matches against reference material that the user submits for verification. Participants must provide government-issued identification and a video selfie to confirm their identity. For now, the feature can detect only AI-based facial manipulation, not voice alteration. Users can flag videos that infringe on their likeness rights and request their removal from the platform.
YouTube’s initiative coincides with a policy update from OpenAI, which, under public pressure, changed the rules for its video generation app Sora 2: using the likeness of real people without their consent is now prohibited. Actor Bryan Cranston, through the Screen Actors Guild (SAG-AFTRA), commended OpenAI for the decision, emphasizing the importance of protecting performers’ identities.
Earlier, Meta had used the likenesses of celebrities such as Taylor Swift and Scarlett Johansson to create chatbots without the stars’ consent, provoking public outrage.
| Company | Tool | Status |
|---|---|---|
| YouTube | Likeness detection tool | Detects AI deepfakes of enrolled users’ faces |
| OpenAI | Sora 2 | Updated policy prohibits using a real person’s likeness without consent |
| Meta | AI chatbots | Used celebrity likenesses without permission |