A new survey reveals that 74% of music fans believe AI shouldn't be used to mimic artists' voices without their permission, and that restrictions should be placed on the use of AI in music.
For its Global Music Report, the IFPI collected responses from over 40,000 music fans across 26 countries. Among the key findings: 74% of respondents agree that AI shouldn't be used to replicate artists' voices without their consent, while 76% believe AI shouldn't be allowed to train on copyrighted music without permission.
Earlier this year, another study, conducted by the studio rental company Pirate, found that 53% of musicians have "concerns about how their audience might perceive music created with the assistance of AI," while 48% of respondents said they are willing to be transparent about their use of AI, and 25% have already used the technology in music production.
The study arrives at a timely moment, as plenty of AI-generated songs mimicking big pop stars have appeared this year. A track by an anonymous TikTok creator replicating the voices of Drake and The Weeknd and a "fake Taylor Swift" track made by a UK-based songwriter are among the stories that drew the most media coverage.
But many industry players are embracing AI rather than fighting it, perhaps assuming the battle is already lost. YouTube, for instance, partnered with Universal Music to create the YouTube AI Incubator, which allows creators to legally generate AI vocals based on artists' voices as long as the rightful copyright owners are paid. The Recording Academy, in turn, published a statement saying that music made with AI is now eligible for the Grammys.
Check out the IFPI’s Global Music Report here.