On one hand, there’s Grimes, who wants to be ‘less famous’ and has even created a platform, Elf.Tech, that lets fans make songs using AI-generated stems of her voice.
On the other, there’s the case of Drake and The Weeknd: a song allegedly performed by the artists was uploaded to all major streaming platforms, only to be removed once it became obvious it had been made with AI.
AI-generated music isn’t a phenomenon of this year, though it gained momentum in 2023. Back in 2020, for instance, an AI-generated composition was performed at a London Symphony Orchestra concert. Attitudes towards AI-made music, meanwhile, remain very diverse.
Grimes’s Elf.Tech shows that not all musicians are worried about AI taking over music. Some embrace it, while others certainly don’t. But to figure out whether AI-generated music actually is a threat, let’s briefly define what music made by artificial intelligence means.
What is AI-generated music?
Music is AI-generated when it's composed or produced by artificial intelligence algorithms. Essentially, it's music that has been created by a machine rather than a human.
AI algorithms are typically trained on large datasets of music (often protected by copyright) that cover a broad range of musical styles, genres, and compositions. The artificial intelligence is programmed to analyse these datasets to identify patterns, structures, rhythms, melodies, harmonies, and other musical elements. Some AI systems are even taught to understand music theory.
Once the AI has been sufficiently trained, it can generate new music: it creates or predicts a sequence of musical notes that follows the patterns and styles it learned during training.
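To make ‘predicting notes that follow learned patterns’ a little more concrete, here is a deliberately tiny, hypothetical illustration rather than how production systems actually work: a first-order Markov chain that ‘trains’ on a few hard-coded note sequences and then generates a new melody from the transitions it has observed. Real tools use large neural networks instead of a transition table, but the core idea of sampling what comes next based on learned patterns is the same.

```python
import random

# Toy "training data": a few melodies written as note names.
training_melodies = [
    ["C", "E", "G", "E", "C"],
    ["C", "D", "E", "G", "E", "D", "C"],
    ["E", "G", "A", "G", "E"],
]

# "Training": count which note tends to follow which.
transitions = {}
for melody in training_melodies:
    for current, nxt in zip(melody, melody[1:]):
        transitions.setdefault(current, []).append(nxt)

# "Generation": start from a note and repeatedly sample the next one
# from the learned transition table.
def generate(start="C", length=8):
    melody = [start]
    for _ in range(length - 1):
        options = transitions.get(melody[-1])
        if not options:
            break
        melody.append(random.choice(options))
    return melody

print(generate())  # e.g. ['C', 'E', 'G', 'E', 'D', 'C', 'E', 'G']
```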
Services that let you generate music with AI
No wonder tech giants like Meta and Google are among the first companies to train and release services that can generate music. But smaller startups aren’t far behind.
Here are a few tools you can play with to generate your own melody or song:
- AudioCraft by Meta is widely considered one of the best AI music generators currently on the market and is often used as a benchmark by other developers.
AudioCraft comprises multiple models that are still being trained and improved. Here are a few: AudioGen is focused on text-to-sound generation and was trained on environmental sounds to produce audio effects.
MusicGen produces diverse, long music samples from user-provided text prompts (see the short code sketch after this list).
- MusicLM by Google is yet another service that can generate new music from simple prompts. Users can create two versions of a song and vote for the one they like better, which helps refine the model. Note that queries mentioning specific artists or requesting vocals won’t be fulfilled. At the same time, Google is addressing the copyright controversies around AI music with its YouTube incubator (more on that below).
- Songburst simply asks you to type a prompt and gives you a tune to match. If you can’t think of a prompt, the app offers suggestions in different categories, including video, lo-fi, podcast, gaming, meditation, and sample.
- Voicify AI can create covers with your favourite artists’ voices: you simply drop in a song, and it generates a cover version for you. Here’s what the app’s developers say about creating, for instance, a cover in the style of The Weeknd:
‘In order to make AI The Weeknd covers, you need to either drop your original vocals/song into the box or use a YouTube link. Once you’ve done so, agree to our Terms of Use and press ready to convert! Your AI The Weeknd cover will be ready in around 30-60 seconds.’
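For those who’d rather script music generation than use a web app, AudioCraft’s MusicGen is open source and can be driven from a few lines of Python. The sketch below follows the usage pattern published in the audiocraft repository; the checkpoint name, prompt, and parameters are illustrative assumptions, and the exact API may change between releases.

```python
# A minimal sketch of text-to-music generation with Meta's open-source MusicGen
# (part of AudioCraft), based on the examples in the audiocraft repository.
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

# Load a small pretrained checkpoint (larger ones trade speed for quality).
model = MusicGen.get_pretrained("facebook/musicgen-small")
model.set_generation_params(duration=10)  # length of the clip in seconds

# One text prompt in, one waveform out; pass several prompts to get a batch.
prompts = ["calm lo-fi beat with soft piano, 80 bpm"]
wavs = model.generate(prompts)

for i, wav in enumerate(wavs):
    # Saves clip_0.wav with loudness normalisation applied.
    audio_write(f"clip_{i}", wav.cpu(), model.sample_rate, strategy="loudness")
```

Under the hood this is the same pattern described earlier: a model trained on large amounts of audio predicts, step by step, what should come next given your text prompt.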
Does AI-generated music have copyright?
At the time of writing, the legal side of AI-generated music is still murky. If you create music with AI, you may not own the generated track, but then who does? The AI developers, the artists whose songs were used to train the AI, or the person who typed the prompt? That is so far unclear. And while AI-generated visual art has been deemed not copyrightable under U.S. law, whether the same applies to music is yet unknown. Another question is whether AI-generated music even requires a licence to use.
Musicians whose art is used for AI training may not even know their music has been leveraged for this purpose and are thus unable to sue AI developers for copyright infringement. The artists whose music is used as input aren’t compensated for the use of their copyrighted tracks and sometimes can’t even prove their music has actually been used to train the AI or generate the output.
In response to these threats and controversies, Google and Universal Music are looking to embrace AI by letting content creators legally generate AI vocals based on artists’ voices, as long as the rightful copyright owners get paid. One step in that direction is the YouTube AI Incubator, which YouTube describes as follows:
‘The incubator will help inform YouTube’s approach as we work with some of music’s most innovative artists, songwriters, and producers across the industry, across a diverse range of culture, genres, and experience.’
Can streaming services allow AI-generated music?
They can, but not in every case. Deezer, for instance, recently launched new technology to detect and delete AI-made music that mimics real artists, and Spotify has ejected thousands of AI-generated songs from its platform. At the same time, Spotify is now open to artificial intelligence, allowing some AI novelties to seep into streaming.
As in the case of Drake and The Weeknd, streaming platforms may initially fall for the clever online marketing run by an anonymous TikTok creator, only to remove the track almost as quickly as they started streaming it.
Spotify and Apple Music require artists to go through a distributor or a label to upload to their platforms, so getting an AI-generated song onto them can be challenging in most cases; SoundCloud, for instance, may stream such music.
Examples of AI-generated songs & music
We’ve already briefly touched on a few AI-created tracks above. Here’s a bit more context, along with some examples.
- Drowned in the Sun, a ‘new’ Nirvana song, was created using Google’s Magenta and dozens of original Nirvana tracks. The neural network also generated lyrics for the singer of a Nirvana tribute band to perform.
- Proto, Holly Herndon’s 2019 album, is the ‘world’s first mainstream album made with AI’, as Vulture put it. The album used a neural network that created audio variations based on hours of vocal samples.
- Heart on My Sleeve was written and produced with AI by the TikTok creator ghostwriter977 (though everyone assumed it was Drake and The Weeknd). The TikToker made the vocals sound as if they were sung by Drake and The Weeknd, and the track blew up on TikTok, Spotify, Apple Music, and YouTube within hours. The platforms eventually wiped the song off following copyright claims from the artists’ record label, Universal Music Group, but not before it went viral, hitting 600,000 Spotify streams, 15 million TikTok views, and 275,000 YouTube views.
Does AI-generated music threaten artists or broaden their horizons?
The New York Times recently asked teenagers whether, in their opinion, AI could replace their favourite pop artists. Most of them said it was curious to listen to an AI-generated song a couple of times, but something was always missing; AI-made songs ‘don’t have the emotional pull of music made by humans.’
There are, of course, both advantages and disadvantages to using AI in music creation. Making music becomes quicker and easier, and musicians who embrace AI will be able to release albums more productively. By using AI as a tool rather than a full replacement, artists can invest more time in mixing, editing, and improving other aspects of their art.
What’s more, as in Grimes’s case, AI can become a medium for strengthening the bond between an artist and their fanbase.
Content creators, YouTubers, TikTokers, and other social media enthusiasts are, of course, the ones who profit from generative AI the most. They now have access to tools that unleash their creativity to an extent no previous technological breakthrough could offer.
The possibilities are quite clear.
The question is whether AI-generated music creates more threats than prospects, and for whom: musicians or, say, bloggers? The answer is both; artists, fans, and creators just need to leverage AI responsibly.
However, musicians face more of the threats. Most music industry players are more concerned about copyright, rights protection, and AI learning from their copyrighted material than about AI mimicking someone else’s voice.
Certain niches might profit from AI music generation, such as soundtracks for games and films or background music for YouTube videos and Reels.
But make no mistake: AI-generated music will hardly ever entirely replace human musicians, the connection they build with their fans, the special vibe of their live gigs, or the goosebumps you get when you hear your favourite artist’s track or see their new album on Spotify.