- YouTube’s likeness detection tool is now open to celebrities and their representatives at agencies including CAA, UTA, WME, and Untitled Management, each of which provided feedback during development.
- The system scans uploaded videos for AI-generated visual matches of enrolled individuals’ faces and offers three response options: privacy policy removal, copyright removal, or no action.
- Enrollment does not require celebrities to have their own YouTube channels; audio detection is planned for a future release with no timeline specified.
- YouTube noted in March 2026 that the total volume of deepfake removals handled by the tool remained “very small.”
What Happened
YouTube announced on April 21, 2026, that its AI-powered likeness detection system is now accessible to the entertainment industry, giving celebrities and their representatives at talent agencies and management companies a mechanism to identify and request removal of AI-generated deepfakes. The expansion brings four major Hollywood representation firms—CAA, UTA, WME, and Untitled Management—into the program; each contributed feedback during the feature’s development. The announcement was reported by TechCrunch.
Why It Matters
AI-generated reproductions of celebrities’ faces—particularly in unauthorized scam advertisements—have become a documented enforcement challenge across major video platforms. YouTube’s likeness detection tool addresses this by applying an enforcement architecture comparable to its Content ID copyright system, which has managed rights claims at scale since its 2007 launch. YouTube is also backing the NO FAKES Act, proposed federal legislation that would establish legal liability for unauthorized AI-generated recreations of an individual’s voice or visual likeness.
Technical Details
The likeness detection system uses computer vision to scan user-uploaded videos for visual matches against enrolled faces. When a potential match is identified, the enrolled participant or their agency representative can select from three responses: request removal under YouTube’s privacy policy, submit a formal copyright removal request, or take no action. YouTube’s platform rules explicitly permit parody and satire, meaning the tool does not automatically generate takedowns for all detected matches. As of March 2026, YouTube described the total volume of deepfake removals attributed to the tool as “very small,” without disclosing a specific count. Audio detection is planned as a future addition, with no timeline provided.
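The detect-then-review flow described above can be sketched in code. This is a minimal illustration, not YouTube’s implementation: the `Match` record, the similarity score, and the review threshold are all hypothetical; the only facts taken from the announcement are that matches are surfaced to a human and that the three response options are privacy removal, copyright removal, or no action.

```python
from dataclasses import dataclass
from enum import Enum


class Action(Enum):
    """The three responses available to an enrolled participant."""
    PRIVACY_REMOVAL = "privacy_policy_removal"   # removal under YouTube's privacy policy
    COPYRIGHT_REMOVAL = "copyright_removal"      # formal copyright removal request
    NO_ACTION = "no_action"                      # e.g. permitted parody or satire


@dataclass
class Match:
    video_id: str
    enrolled_person: str
    similarity: float  # hypothetical score from the computer-vision model


def queue_for_review(matches: list[Match], threshold: float = 0.9) -> list[Match]:
    """Surface likely visual matches to the participant or their agency
    representative. The choice among the three Actions is made by a human
    reviewer, not automatically, since parody and satire remain permitted."""
    return [m for m in matches if m.similarity >= threshold]
```

The key design point the sketch mirrors is that detection and enforcement are decoupled: the system flags candidate matches, but a takedown only occurs when the rights holder selects one of the removal actions.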
Who’s Affected
Celebrities and the agencies representing them are the primary users of the expanded capability. YouTube specified that enrollment does not require individuals to maintain their own YouTube channels, extending access to the broader talent represented by agencies rather than limiting it to active platform creators. CAA, UTA, WME, and Untitled Management are the four agencies named as initial launch partners. Creators producing parody or fan content featuring celebrity likenesses retain protection under YouTube’s existing satire exemption.
What’s Next
YouTube has committed to adding audio detection to the likeness system, which would extend coverage to AI-synthesized voice cloning—a capability absent from the current release. The platform’s concurrent support for the NO FAKES Act indicates YouTube is pursuing a federal regulatory framework alongside its technical enforcement tools. The current expansion follows a phased rollout that began with a creator pilot, broadened to politicians, government officials, and journalists earlier in 2026, and now includes the entertainment industry.