
Is the Music Business Too Afraid of Big, Bad AI?

Like a lot of people, I was — and maybe still am — alarmed by the changes AI could bring to musical creativity. But after a conversation yesterday with a major executive in the music industry who has a formidable knowledge of its business, legalities and people, I’m not as sure.

This person gave a fairly controversial take on generative AI: They pretty much scoffed, saying everyone is overreacting, and predicted that it will end up like ringtones — the mid-‘00s trend of using a snippet of a song as the ringer on a cellphone, for those who don’t (or would rather not) remember. Ringtones, which were usually fully licensed by copyright owners, were a lucrative business — Lil Wayne’s “Lollipop” apparently still holds the sales record, with more than 5 million sold at a retail price of $2.99 a pop — but, with rare exceptions, they became annoying a long time ago.

Of course, the possible creative uses of AI in music far exceed those of ringtones, but the real question is less what could be done with AI than why anyone would want to.


For example, let’s just say it would be possible to obtain the licenses for Frank Sinatra rapping a Notorious B.I.G. track or Beyonce singing a D’Angelo song. The results might be awesome — but how many times would you or anyone play it? If it were pressed onto vinyl, would you buy it? And even if a few such novelty songs went viral on TikTok — every legendary artist’s dream — how long before it just became annoying?

None of that is to say it should be legal or acceptable to replicate the sound of anyone’s voice or music without permission and compensation (except in parody). Although it is not currently possible to copyright the sound of a human voice, at least a couple of legal actions are circling closer to that concept, and lawyers, as lawyers do, have found other ways to smack down copycats.

Last year, Universal Music was able to quash “Heart on My Sleeve,” Ghostwriter’s AI-assisted song that channels Drake and the Weeknd, by arguing that the AI was trained on copyrighted material without permission from the owners. In the late ‘80s and early ‘90s, Tom Waits and Bette Midler both won lawsuits against companies that had used copycat singers in TV advertisements (after Waits and Midler had turned them down) on the basis of false advertising, which works for ads but wouldn’t for a commercially released song. But as my ringtone-summoning friend said yesterday, it really just amounts to a virtual impersonator or cover band. (Please note: We’re talking about music here, not legitimately dangerous AI uses such as deepfakes of political leaders exhorting their followers to action or violence — not that deepfakes are necessary for that.)

AI has already been used for good in the music world — a combination of AI and a sound-alike singer has given new voice to country great Randy Travis, who lost his ability to sing after a serious stroke in 2013, and such technology will only get better. But it probably won’t be long before AI is used for less-good: healthy singers will simply AI their own voices on new songs rather than actually singing them (if it’s not happening already), and it’s not hard to imagine producers or labels saving money by AI-ing multiple voices instead of paying backing singers — a bazillion Beyonces, an army of Adeles with just a few keystrokes — not to mention engineers and other technical staff. The real threat of AI to the music world lies in the large number of jobs it will replace, although that’s rarely what the people at the top of the industry are thundering on about.

We were already pretty far down this road even before generative AI burst into the mass consciousness with the launch of ChatGPT late in 2022. For years, bad singers have been made to sound almost good in the studio with the wonder of Auto-Tune, and at many major concerts, vocalists are singing along with prerecorded backing tracks of themselves; some lip-synch so well, and live-sound technicians are able to modulate the volume so strategically, that it takes well-trained eyes and ears to detect it. How long before we won’t be able to tell at all? For that matter, how long before ABBA’s “Abba-tar” technology becomes so realistic that it renders actual touring a wasteful indulgence?

Why stop there? Fictional, AI-created pop stars already exist, and it’s probably just a matter of time before biopics or alternate-history stories come to virtual life: Someone conceivably could feed every recording of John Lennon’s voice into AI and emerge with a convincing approximation of what a series of post-1980 albums might have sounded like; an avatar could go on tour, accompanied by live musicians, just like hologram concerts. And the day probably isn’t far away when people could use AI to create a Zoom call with historical figures or deceased loved ones. It feels gross just thinking about it, but will it always? Maybe we wouldn’t want to leave, if that fake world were nicer than the real one.

Randy Travis’ “new” song was created with his full approval and, to the degree possible, participation. Although he didn’t write the song that his AI voice appears on, presumably it’s already possible for him or any singer to write songs on a keyboard using AI facsimiles of their voices. And it’s probably just a matter of time before we won’t be able to tell the difference.

During the pandemic, many of us expected livestreams not to replace concerts in the future but to supplement them — if the live tickets sold out, you could just buy one for the livestream and see the show that way. But with some exceptions, there’s been little interest in the concept. People want a real show and a real performance, although the meaning of “real” gets more slippery every day.

I could wax with great sanctimony about the sanctity of genuine human expression. But you’ve heard and read it all before, and I’ll spare you the obvious genie-already-out-of-the-bottle clichés. What is or isn’t legal is for lawyers to decide, and what is or isn’t artistically acceptable is up to the individual. AI can lead to many dangerous things, but with solid safeguards and laws, the death of human musical creativity isn’t one of them.
