YouTube will warn civic leaders and reporters about deepfakes that use their likeness


Creating videos with artificial intelligence is a real concern. Even in an age where AI video tends to have telltale giveaways (like the oddly fast-spinning semi-truck in last year's Coca-Cola Christmas ad), it is often good enough to fool viewers, so some platform owners are trying to get ahead of potentially problematic deepfakes.

Today, YouTube is expanding its similarity detection to cover politicians, government officials, political candidates and reporters. The company previously rolled the feature out to YouTube partners, but those in these newly protected categories won't need to be in the program to participate. As with Content ID, YouTube's similarity detection works by finding facial matches in AI-generated content on the platform, after which an eligible participant can submit a takedown request for that particular video.

YouTube says it doesn't automatically pull all matching content, and there are special carve-outs for parody and satire of world leaders; instead, the company looks for anything that violates its pre-existing privacy rules. Those who qualify for the program must verify their identity with Google, though the company says that data is not used to train AI models. YouTube is also using the announcement to urge Congress to pass legislation that would "establish a federal right of publicity and act as a blueprint for international adoption to ensure that technology serves, never replaces, human creativity."

Unfortunately, similarity detection isn't currently available unless you're in YouTube's Partner Program or hold one of these supported public roles.



