YouTube expands AI deepfake detection tool to politicians, won’t say if Trump is included



Ahead of this year’s midterm elections, YouTube is making it easier for politicians and journalists to remove AI deepfakes of themselves from its platform. But the company won’t say who has access to the tool.

The video streaming giant announced today that it is expanding access to its lookalike detection tool to journalists, government officials and political candidates. The tool flags videos that show a user’s likeness in AI-generated content and allows them to request the removal of unauthorized videos.

“YouTube is where the world understands the events that shape their lives—from breaking news to debates that spark civil discourse,” YouTube vice president of creative products Amjad Hanif and vice president of government affairs and public policy Leslie Miller said in a blog post. “As AI-generated content evolves, the individuals at the center of these conversations need reliable tools to protect their identities.”

The expansion comes at a time when AI deepfakes are becoming increasingly convincing, raising concerns about their potential to spread disinformation, particularly around elections. The news also comes as YouTube continues to lean further into artificial intelligence.

Last year, the company brought a custom version of Google’s video generation model, Veo 3, to Shorts — YouTube’s fast, vertical video feed similar to TikTok and Instagram Reels. This tool, along with other AI editing features on the platform, has made it easier than ever for users to create deepfakes. At the same time, YouTube has tried to provide tools to mitigate the risks.

The company’s likeness detection tool works much like Content ID, YouTube’s copyright-matching system, but for human faces. YouTube first started testing the system in 2024 with celebrities and athletes, and last year expanded it to creators in the YouTube Partner Program.

To register for the program, eligible users must verify their identity by providing a video selfie and government ID. Any data provided will only be used for verification purposes, not to train Google’s AI, the company said.

Once approved, users can review videos that use their likeness and request their removal. YouTube emphasizes, however, that a removal request does not guarantee the video will actually be taken down.

“YouTube has a long history of protecting free expression and content in the public interest, including protecting content such as parody and satire, even when it is used to criticize world leaders or figures of authority,” the company said in a blog post. “We will continue to carefully evaluate these exceptions when we receive requests for removal.”

A YouTube spokesperson told Gizmodo that the company plans a “broad international rollout” and that access to the tool will be expanded in the coming weeks and months.

YouTube declined to comment on which politicians and journalists were included in the initial pilot group, including whether US President Donald Trump was invited. Trump himself and his administration have been known to post AI-generated content using the likenesses of his political and media rivals.
