Google, in collaboration with YouTube, has unveiled its latest AI music generation model, Lyria, along with two AI experiments aimed at "transforming the music industry." The work tackles a long-standing challenge for AI systems: producing music that is both compelling and compositionally intricate.
According to Google, Lyria, developed by Google DeepMind, stands out for its ability to generate high-quality music, incorporating instrumentals and vocals while maintaining continuity across different sections. The model also gives users finer control over the output's style and performance, marking a notable step forward in AI-generated music.
One of the experiments, Dream Track, which we reported on earlier, is a YouTube Shorts initiative designed to foster deeper connections between artists, creators, and fans through music creation. Partnering with artists such as Alec Benjamin, Charlie Puth, and Demi Lovato, a select group of creators will use Lyria to produce unique soundtracks featuring AI-generated voices and musical styles. The experiment aims to test new ways for artists to engage with their audiences and to help shape the future of AI in music.
"Dream Track users can simply enter a topic and choose an artist from the carousel to generate a 30 second soundtrack for their Short. Using our Lyria model, Dream Track simultaneously generates the lyrics, backing track, and AI-generated voice in the style of the participating artist selected," says Google in its press release.
Additionally, Google is developing a set of Music AI tools in collaboration with artists, songwriters, and producers, letting users create new music from scratch, transform audio across styles or instruments, and build backing tracks and vocal accompaniments.
The AI experiments align with YouTube's AI principles, emphasizing creative expression while safeguarding artists' rights.