
What Happens When A.I. Enters the Concert Hall

Artificial intelligence is not new to classical music. But its recent, rapid developments have composers worried, and intrigued.

When the composer and vocalist Jen Wang took the stage at the Monk Space in Los Angeles earlier this year to perform Alvin Lucier’s “The Duke of York” (1971), she sang with a digital rendition of her voice, synthesized by artificial intelligence.

It was the first time she had done that. “I thought it was going to be really disorienting,” Wang said in an interview, “but it felt like I was collaborating with this instrument that was me and was not me.”

Isaac Io Schankler, a composer and music professor at Cal Poly Pomona, conceived the performance and joined Wang onstage to monitor and manipulate the Realtime Audio Variational autoEncoder, or R.A.V.E., the neural audio synthesis algorithm that modeled Wang’s voice.

R.A.V.E. is an example of machine learning, a specific category of artificial intelligence technology that musicians have experimented with since the 1990s — but that now is defined by rapid development, the arrival of publicly available, A.I.-powered music tools and the dominating influence of high-profile initiatives by large tech companies.

Dr. Schankler ultimately chose R.A.V.E. for that performance of “The Duke of York” because its ability to augment an individual performer’s sound, they said, “seemed thematically resonant with the piece.” For it to work, the duo needed to train it on a personalized corpus of recordings. “I sang and spoke for three hours straight,” Wang recalled. “I sang every song I could think of.”

Antoine Caillon developed R.A.V.E. in 2021, during his graduate studies at IRCAM, the institute founded by the composer Pierre Boulez in Paris. “R.A.V.E.’s goal is to reconstruct its input,” he said. “The model compresses the audio signal it receives and tries to extract the sound’s salient features in order to resynthesize it properly.”
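To make that description concrete, here is a deliberately simplified sketch, in PyTorch, of the compress-and-resynthesize loop at the heart of a neural audio autoencoder. It is illustrative only and is not R.A.V.E.’s actual architecture, which, among other refinements, uses a variational latent space and a far more sophisticated training procedure; every class name, layer size and placeholder signal below is invented for the example.

```python
# A toy audio autoencoder illustrating the "compress, extract, resynthesize"
# idea Caillon describes. Not R.A.V.E.'s real code; names and sizes are invented.
import torch
import torch.nn as nn

class TinyAudioAutoencoder(nn.Module):
    def __init__(self, latent_dim=16):
        super().__init__()
        # Encoder: compress raw audio into a short sequence of latent features.
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=16, stride=8, padding=4),
            nn.ReLU(),
            nn.Conv1d(32, latent_dim, kernel_size=16, stride=8, padding=4),
        )
        # Decoder: resynthesize audio from those compressed features.
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(latent_dim, 32, kernel_size=16, stride=8, padding=4),
            nn.ReLU(),
            nn.ConvTranspose1d(32, 1, kernel_size=16, stride=8, padding=4),
            nn.Tanh(),
        )

    def forward(self, audio):
        latent = self.encoder(audio)   # extract the sound's salient features
        return self.decoder(latent)    # try to rebuild the input from them

model = TinyAudioAutoencoder()
voice = torch.randn(1, 1, 48_000)                      # stand-in for one second of recorded voice
reconstruction = model(voice)                          # same shape as the input
loss = nn.functional.mse_loss(reconstruction, voice)   # training minimizes this reconstruction gap
```

In practice, a model of this kind would be trained on a corpus like Wang’s three hours of singing and speech, and the placeholder loss above would be replaced by far more elaborate objectives.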

Wang felt comfortable performing with the software because, no matter the sounds it produced in the moment, she could hear herself in R.A.V.E.’s synthesized voice. “The gestures were surprising, and the textures were surprising,” she said, “but the timbre was incredibly familiar.” And, because R.A.V.E. is compatible with common electronic music software, Dr. Schankler was able to adjust the program in real time, they said, to “create this halo of other versions of Jen’s voice around her.”

Tina Tallon, a composer and professor of A.I. and the arts at the University of Florida, said that musicians have used various A.I.-related technologies since the mid-20th century.

“There are rule-based systems, which is what artificial intelligence used to be in the ’60s, ’70s, and ’80s,” she said, “and then there is machine learning, which became more popular and more practical in the ’90s, and involves ingesting large amounts of data to infer how a system functions.”
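That distinction can be made concrete with a toy sketch in Python: a rule-based program follows musical rules its programmer wrote by hand, while a machine-learning program infers its own rules from the data it ingests. Here a first-order Markov chain stands in, very loosely, for today’s far larger models; the function names and the miniature “corpus” are invented for the example.

```python
import random

# Rule-based: the programmer hand-writes the musical rules the system follows.
def rule_based_melody(length=8):
    scale = ["C", "D", "E", "G", "A"]    # rule: stay in a pentatonic scale
    melody = ["C"]                       # rule: begin on the tonic
    while len(melody) < length:
        idx = scale.index(melody[-1])
        idx += random.choice([-1, 1])    # rule: move by one scale step at a time
        melody.append(scale[max(0, min(len(scale) - 1, idx))])
    return melody

# Machine learning: the system ingests existing tunes and infers
# its own note-to-note transition "rules" from that data.
def learn_transitions(corpus):
    table = {}
    for tune in corpus:
        for current, following in zip(tune, tune[1:]):
            table.setdefault(current, []).append(following)
    return table

def learned_melody(table, start="C", length=8):
    melody = [start]
    while len(melody) < length:
        melody.append(random.choice(table.get(melody[-1], [start])))
    return melody

corpus = [["C", "E", "G", "E", "C"], ["C", "D", "E", "D", "C"]]  # stand-in training data
print(rule_based_melody())
print(learned_melody(learn_transitions(corpus)))
```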

Today, developments in A.I. that were once confined to specialized applications touch virtually every corner of life, and they already shape the way people make music. Dr. Caillon, in addition to developing R.A.V.E., has contributed to the Google-led projects SingSong, which generates accompaniments for recorded vocal melodies, and MusicLM, a text-to-music generator. Innovations in other areas are driving new music technologies, too: WavTool, a recently released, A.I.-powered music production platform, fully integrates OpenAI’s GPT-4 to let users create music via text prompts.

For Dr. Tallon, the difference in scale between individual composers’ customized use of A.I. and these new, broad-reaching technologies represents a cause for concern.

“We are looking at different types of datasets that are compiled for different reasons,” she said. “Tools like MusicLM are trained on datasets that are compiled by pulling from thousands of hours of labeled audio from YouTube and other places on the internet.”

“When I design a tool for my own personal use,” Dr. Tallon continued, “I’m looking at data related to my sonic priorities. But public-facing technologies use datasets that focus on, for instance, aesthetic ideals that align more closely with Western classical systems of organizing pitches and rhythms.”

Concerns over bias in music-related A.I. tools do not stop at aesthetics. Enongo Lumumba-Kasongo, a music professor at Brown University, also worries about how these technologies can reproduce social hierarchies.

“There is a very specific racial discourse that I’m very concerned about,” she said. “I don’t think it’s a coincidence that hip-hop artistry is forming the testing ground for understanding how A.I. affects artists and their artistry given the centuries-long story of co-optation and theft of Black expressive forms by those in power.”

The popularity of recent A.I.-generated songs mimicking artists like Drake, the Weeknd and Travis Scott has animated Dr. Lumumba-Kasongo’s fears. “What I’m most concerned about with A.I. Drake and A.I. Travis Scott is that their music is highly listenable,” she said, “and calls into question any need for an artist once they’ve articulated a distinct ‘voice.’”

For Dr. Schankler, there are key differences between using R.A.V.E. to synthesize new versions of a collaborator’s voice and using A.I. to anonymously imitate a living musician. “I don’t find it super interesting to copy someone’s voice exactly, because that person already exists,” they said. “I’m more interested in the new sonic possibilities of this technology. And what I like about R.A.V.E. is that I can work with a small dataset that is created by one person who gives their permission and participates in the process.”

The composer Robert Laidlow also uses A.I. in his work to contemplate the technology’s fraught implications. “Silicon,” which premiered last October with the BBC Philharmonic under Vimbayi Kaziboni, employs multiple tools to explore themes drawn from the technology’s transformative and disruptive potential.

Laidlow described “Silicon” as “about technology as much as it uses technology,” adding: “The overriding aesthetic of each movement of this piece are the questions, ‘What does it mean for an orchestra to use this technology?’ and ‘What would be the point of an orchestra if we had a technology that can emulate it in every way?’”

The work’s entirely acoustic first movement features a mixture of Laidlow’s original music and ideas he adapted from the output, he said, of a “symbolic, generative A.I. that was trained on notated material from composers all throughout history.” The second movement features an A.I.-powered digital instrument, performed by the orchestra’s pianist, that “sometimes mimics the orchestra and sometimes makes uncanny, weird sounds.”

In the last movement, the orchestra is accompanied by sounds generated by a neural synthesis program called PRiSM-SampleRNN, which is akin to R.A.V.E. and was trained on a large archive of BBC Philharmonic radio broadcasts. Laidlow described the resulting audio as “featuring synthesized orchestral music, voices of phantom presenters and the sounds the artificial intelligence has learned from audiences.”

The size of “Silicon” contrasts with the intimacy of Dr. Schankler and Wang’s performance of “The Duke of York.” But both instances illustrate A.I.’s potential to expand musical practices and human expression. And, importantly, by employing small, curated datasets tailored to individual collaborators, these projects attempt to sidestep the ethical concerns many have identified in larger-scale technologies.

George E. Lewis, a music professor at Columbia University, has designed and performed alongside interactive A.I. music programs for four decades, focusing primarily on the technology’s capacity to participate in live performance. “I keep talking about real-time dialogue,” he said. “Music is so communal, it’s so personal, it’s so dialogic, it’s communitarian.”

He is hopeful that people will continue to explore interactivity and spontaneity. “It seems the current generation of A.I. music programs have been designed for a culturally specific way of thinking about music,” Lewis said. “Imagine if the culture favored improvisation.”

As a composer, Lewis continues to explore these ideas in works like “Forager,” for chamber ensemble and A.I., created during a 2022 residency at PRiSM. The piece is the latest update to “Voyager,” which he developed in 1985 and described as a “virtual improvising pianist.” “Forager” enhances the software’s responsiveness to its human co-performers with new programming that enables what he called “a more holistic recognition” of musical materials.

The differences among Dr. Schankler’s use of R.A.V.E., Robert Laidlow’s orchestral work “Silicon” and Lewis’s interactive “Forager” underscore the nuances with which composers and experimental musicians are approaching A.I. This culture celebrates technology as a means to customize musical ideas and computer-generated sounds to suit specific performers and a given moment. Still, these artistic aims stand at odds with the foreboding voiced by others like Dr. Tallon and Dr. Lumumba-Kasongo.

Individual musicians can do their part to counter those worries by using A.I. ethically and generatively. But even so, as Laidlow observed, being truly individual — which is to say independent — is difficult.

“There is a fundamental problem of resources in this field,” Laidlow said. “It is almost impossible to create something computationally powerful without the assistance of a huge, technologically advanced institute or corporation.”
