YouTube's New Tools to Help Creators Detect Deepfakes That Use Their Faces

Key points

  • YouTube is working on new tools to protect creators and their identities.
  • A new technology will help creators identify deepfakes that use their faces and voices.
  • YouTube is rolling out AI technology to detect unauthorized content and manage its use.

YouTube recently announced that it is developing new tools to protect creators, artists, and other public figures from deepfakes. One of the key features being worked on is called “likeness management technology,” which aims to help people spot AI-generated content that uses their faces or voices without permission.

The tool is designed to let anyone, including creators, actors, musicians, and athletes, detect deepfake content on YouTube and manage its presence. If deepfakes are found, they can request removal of that content from the platform. This development follows YouTube's July policy update, which allows users to report AI-generated content that mimics their voice or face without their consent.

YouTube stressed that accessing and using creator content without permission goes against its terms of service. The company also emphasized that all AI-generated content must follow its Community Guidelines, ensuring that creators' identities are not misused.


YouTube’s Fight Against Deepfakes

In addition to deepfake detection, YouTube is enhancing its Content ID system. A new feature, called synthetic-singing identification technology, will help musicians and artists identify AI-generated content that mimics their singing voices. YouTube is working closely with its partners to refine this technology, and a pilot program is expected to launch next year.

Though no specific launch date for the deepfake detection tools has been announced yet, YouTube’s efforts are a promising step forward in tackling the growing problem of deepfakes online.