YouTube takes measures to protect celebrities from deepfakes

Technology|22/4/2026

The step aims to strengthen digital protection tools through “similarity detection,” a technology that allows rights holders to request content removal or share in its revenue.

YouTube has announced a new move to enhance its digital protection tools, revealing an expansion of its “similarity detection” technology, which relies on artificial intelligence to identify AI-generated content such as deepfakes. The tool is being extended to a broader group of people in the entertainment industry, including celebrities, talent agencies, and management companies.

The technology works similarly to YouTube’s existing Content ID system, which is used to detect copyrighted material within uploaded videos, allowing rights holders to request removal or monetize the content. However, “similarity detection” focuses on a different aspect, targeting synthetic faces and voices in order to prevent the unauthorized use of public figures’ identities.

AI-based technology

According to the platform, the system scans AI-generated videos to detect visual matches with registered individuals’ faces. When a match is found, the affected person can take several actions: request removal for a privacy violation, file a copyright-based takedown request, or take no action.
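To make the matching step concrete, here is a minimal sketch of how a likeness-detection comparison might work, assuming faces are reduced to fixed-length embedding vectors and compared by cosine similarity. All names, vectors, and the threshold below are illustrative assumptions, not YouTube’s actual implementation.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def match_registered_faces(frame_embedding, registry, threshold=0.85):
    """Return registered identities whose reference embedding is close
    to the embedding extracted from a video frame."""
    return [
        name
        for name, ref_embedding in registry.items()
        if cosine_similarity(frame_embedding, ref_embedding) >= threshold
    ]

# Toy registry of "registered individuals" (2-D vectors for illustration;
# real systems use high-dimensional embeddings from a face-recognition model).
registry = {
    "celebrity_a": [0.9, 0.1],
    "celebrity_b": [0.1, 0.9],
}

# An embedding extracted from an uploaded frame, close to celebrity_a's.
print(match_registered_faces([0.88, 0.12], registry))  # ['celebrity_a']
```

In a real pipeline the flagged matches would then feed the review step described above, where the person can request removal, file a copyright claim, or do nothing.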

The move comes in response to growing concerns over the misuse of AI technologies, especially regarding celebrities whose images are frequently used in misleading content or fraudulent advertisements without consent.

The platform also stated that not all content will be automatically removed, as satirical and parody content will remain allowed under its policies. It also noted that the technology will eventually expand to include voice recognition, not just facial detection.

In the same context, YouTube called for broader federal-level legal frameworks in the United States, expressing support for the proposed NO FAKES Act in Washington, which aims to regulate the use of AI in creating unauthorized replicas of individuals’ voices and likenesses, thereby strengthening legal protection against digital impersonation.