YouTube is developing tools to help creators detect and manage AI-generated content that uses their face or voice without permission. The company announced that these features, aimed at protecting artists, creators, and public figures, will begin pilot programs early next year.
YouTube’s vice president of creator products, Amjad Hanif, said in a blog post, “As AI develops, we believe it should enhance human creativity, not replace it. [We're] equipping [creators] with the tools they need to harness AI's creative potential while maintaining control over how their likeness, including their face and voice, is represented.”
One of the tools under development is face-detection technology, which will allow people from various industries to monitor and control deepfake videos that feature AI-generated versions of their faces. YouTube has not provided a specific release date for this tool but confirmed its commitment to safeguarding creators' rights.
"We’re actively developing new technology that will enable people from a variety of industries—from creators and actors to musicians and athletes—to detect and manage AI-generated content showing their faces on YouTube. Together with our recent privacy updates, this will create a robust set of tools to manage how AI is used to depict people on YouTube," the blog post says.
YouTube is also expanding its Content ID system with a new feature called "synthetic-singing identification," which will let music industry partners detect AI-generated versions of artists’ singing voices and decide on appropriate actions.