ukiyo journal - 日本と世界をつなぐ新しいニュースメディア

YouTube's New AI Tool Fights Deepfakes! What AI Likeness Detection Changes: The Day YouTube Delves into Facial "Copyright"

October 23, 2025, 00:31

Introduction: From Labels to "Detection"

How do video platforms in the AI era protect creators' trust? YouTube's newly introduced "Likeness Detection" system identifies AI-generated videos of "someone who looks just like you" and lets the actual person review them and request removal. On the first day of the rollout, emails go out to targeted creators, and a "Content detection" list marked "to be confirmed" appears in YouTube Studio, the start of a new routine. Announced on October 21, 2025, U.S. time, the feature is initially available to some Partner Program participants and will expand over the coming months. (The Verge)


What is the Feature?: Verification → Detection → Claim

The core usage is straightforward.

  1. Creators complete identity verification with a government-issued ID and a short video selfie (which may take a few days).

  2. YouTube scans new uploads based on this facial template and extracts videos with potential likeness.

  3. Creators review flagged videos in Studio, and if they judge a video to be unauthorized AI-generated use of their likeness, they file a privacy claim (removal request). The implementation resembles "Content ID," but the target is not copyrighted material; it is an individual's face. YouTube itself warns that during the beta phase there may be false detections, with genuine videos of the person mixed in. (Google Support)
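The three-step flow above can be sketched as a small pipeline. This is purely illustrative: the actual matching internals are not public, so the similarity score, threshold, and all names here (`Upload`, `scan_uploads`, `review`) are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Upload:
    video_id: str
    similarity: float  # hypothetical face-similarity score against the creator's template

def scan_uploads(uploads: list[Upload], threshold: float = 0.8) -> list[Upload]:
    """Step 2 (illustrative): flag new uploads whose likeness score crosses
    a made-up threshold. The real detection pipeline is not public."""
    return [u for u in uploads if u.similarity >= threshold]

def review(flagged: list[Upload], unauthorized_ids: set[str]) -> dict[str, str]:
    """Step 3 (illustrative): the creator reviews each hit and files a privacy
    claim for unauthorized AI use; hits on genuine footage (the beta false
    positives YouTube warns about) are dismissed."""
    return {
        u.video_id: ("privacy_claim" if u.video_id in unauthorized_ids else "dismiss")
        for u in flagged
    }
```

The key point the sketch captures is that the platform only surfaces candidates; the removal decision starts with the creator's own review.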


Still Focused on "Faces." What About Voices?

The FAQ states clearly that, at this stage, the focus is on visual likeness (faces); voice-clone detection is not included. Comprehensive detection that covers voices is the next assignment. (Google Support)


Timeline: From the Pilot with CAA

In December 2024, YouTube partnered with Creative Artists Agency (CAA) to begin early testing with prominent talent. Building on the insights gained there, it moved to the initial rollout in October 2025. (blog.youtube)


Relation to Existing "AI Labels"

In March 2024, YouTube introduced self-reported labels for AI-generated and altered content, and the platform may also apply labels itself as needed; this was the "display" phase. Likeness detection moves beyond that to a "discovery and action" phase, addressing deepfakes that circulate without the person's knowledge. (The Verge)


Perspective of Comparison with Other Companies and Platforms

According to verification reports from the same period, many major social networks were not properly displaying AI videos carrying embedded metadata such as C2PA. YouTube had partially addressed this with notices in the description section, but the lack of visibility was noted as a challenge. Only with the addition of detection and claims does the "protection" deepen. (The Washington Post)


Voices of Creators: Capturing Initial Reactions on Social Media

Shortly after launch, the following range of reactions was observed on social media (summary).

  • Supporters: "I've been waiting for a 'face version' of Content ID"; "It's reassuring to be able to search for fake videos that pose a risk of backlash." News outlets also praised the point that individuals can now scale their response. (MacRumors)

  • Cautious voices: "Submitting a government ID and a video selfie is a high hurdle"; "If there are false detections in beta, the operational burden is concerning"; "Voice impersonation remains." The help documentation itself reflects the trade-off between privacy and operational burden. (Google Support)

  • Realists: "It's a step forward from relying on labels, but ultimately the platform makes the removal decision, and drawing lines for objections and exceptions (such as parody) is difficult." Concerns about the effectiveness of display, which have persisted since the introduction of AI labels, have resurfaced. (The Verge)

In communities like Reddit, aversion to AI-generated content and fatigue over the "AI-ification of timelines" are widely shared, with voices seeing this as "at least a step forward in countermeasures" coexisting with those viewing it as "an endless cat-and-mouse game." (Reddit)


Specific Workflow: How It Works on Your Channel

  • Initial setup: Studio → Content detection → Likeness → "Start." After agreeing, verify your identity with a government ID and a video selfie. Completion may take up to several days. (Google Support)

  • Detection review: Sort and review hits by view count and channel size. If you judge a video to be unauthorized AI-generated use, file a privacy claim; if it is a reuse of your copyrighted material, file a copyright claim instead. (Google Support)

  • Exceptions to remember: Removal may not occur in some cases, such as parody, satire, or when an AI disclosure is present. (Google Support)
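The routing logic described in these steps (privacy claim vs. copyright claim vs. documented exceptions) can be summarized in a short decision function. This is a hypothetical sketch of the decision a creator faces, not YouTube's actual policy engine; the function and parameter names are invented for illustration.

```python
def route_claim(is_ai_likeness: bool,
                reuses_copyrighted_material: bool,
                is_parody_or_satire: bool,
                has_ai_disclosure: bool) -> str:
    """Hypothetical routing of a flagged video, mirroring the steps above."""
    # Documented exceptions: parody, satire, or AI disclosure may block removal.
    if is_parody_or_satire or has_ai_disclosure:
        return "may_not_be_removed"
    # Unauthorized AI-generated likeness -> privacy claim (removal request).
    if is_ai_likeness:
        return "privacy_claim"
    # Reuse of the creator's copyrighted material -> copyright claim.
    if reuses_copyrighted_material:
        return "copyright_claim"
    return "no_action"
```

Note that the exception branch comes first: even a genuine AI likeness may survive removal if it falls under parody, satire, or disclosed AI use.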


Benefits and Limitations: Three Evaluation Axes

① Scale: Cross-platform scanning against a facial template can surface fake videos the person could never have found alone. A platform-wide search akin to Content ID is indeed powerful. (The Verge)


② Streamlining: Detection → review → claim is integrated into Studio, giving individuals with limited legal resources a path to act. (Google Support)


③ Risk: Concerns remain about false detections (hits on genuine videos) and about how the data behind the facial template is used and stored. Although data is stored internally with consent, operational details such as retention period and use for model improvement should continue to be monitored. (Google Support)


Towards a Redefinition of "Rights"

This is an attempt to extend protection beyond copyright into protection of personality. AI-generated "likeness fabrication" cannot avoid tension with freedom of expression and parody. The phased rollout YouTube advanced with CAA is a pragmatic approach: first mitigate harm to celebrities, then extend to general creators. Ultimately, however, cross-industry interoperability (such as C2PA or invisible watermarks) and portability of removal requests across platforms will be necessary. (blog.youtube)


Conclusion: The Next Steps are "Voice" and "Visualization"

The first step has been taken. Next come detection of voice impersonation and more visible labels that users can easily understand. As long as labels are overlooked, fake videos will spread as if they were true. How well detection and visualization can be refined will determine trust in the AI era. (Google Support)



Reference Articles

YouTube's AI "Likeness Detection" Tool is Hunting Deepfakes of Popular Creators
Source: https://www.theverge.com/news/803818/youtube-ai-likeness-detection-deepfake
