


DeepMind and YouTube release Lyria, a gen-AI model for music, and Dream Track to build AI tunes | TechCrunch


Back in January, Google made some waves (soundwaves, that is) when it quietly released research on AI-based music creation software that built tunes from word prompts. Today, its sister business Google DeepMind went several steps further: it announced a new music generation model called Lyria that will work in conjunction with YouTube, along with two toolsets it describes as “experiments” built on Lyria. Dream Track will let creators generate music for YouTube Shorts, while a set of Music AI tools is aimed at helping with the creative process: for example, building a tune out of a snippet that a creator might hum. Alongside these, DeepMind said it’s adapting SynthID, the technology it uses to watermark AI-generated images, to watermark AI music, too.

The new tools are being released at a time when AI continues to court controversy in the creative arts. It was a key subject at the heart of the Screen Actors Guild strike (which finally ended this month); and in music, where Ghostwriter famously used AI to mimic Drake and The Weeknd, the open question is whether AI-driven creation will become the norm in the future.

With the tools announced today, the first priority for DeepMind and YouTube appears to be making AI music credible, both as a complement to creators’ work and, in the most basic aesthetic sense, as something that actually sounds like music.

As Google’s past efforts have shown, the longer one listens to AI-generated music, the more distorted and surreal it tends to sound, drifting further from the intended outcome. As DeepMind explained today, that’s in part because of the complexity of the information going into music models, covering beats, notes, harmonies and more.

“When generating long sequences of sound, it’s difficult for AI models to maintain musical continuity across phrases, verses, or extended passages,” DeepMind noted today. “Since music often includes multiple voices and instruments at the same time, it’s much harder to create than speech.”

It’s notable, then, that some of the first applications of the model are appearing in shorter pieces.

Dream Track is initially rolling out to a limited set of creators to build 30-second AI-generated soundtracks in the “voice and musical style of artists including Alec Benjamin, Charlie Puth, Charli XCX, Demi Lovato, John Legend, Sia, T-Pain, Troye Sivan, and Papoose.”

The creator enters a topic and chooses an artist, and the tool generates a 30-second track, complete with lyrics, backing instrumentation, and the voice of the selected musician, intended for use in Shorts. YouTube shared an example of a track in the style of Charlie Puth alongside the announcement.

YouTube and DeepMind are careful to point out that these artists are involved in the project, helping test the models and providing other input.

Lyor Cohen and Toni Reed, respectively YouTube’s head of music and its VP of emerging experiences and community projects, note that the Music AI tools being released came out of the company’s Music AI Incubator, a group of artists, songwriters and producers that tests and gives feedback on projects.

“It was clear early on that this initial group of participants were intensely curious about AI tools that could push the limits of what they thought possible,” they note. “They also sought out tools that could bolster their creative process.”

While Dream Track is getting a limited release today, the Music AI tools will only roll out later this year, they said. DeepMind teased three areas they will cover: creating music on a specified instrument, or a whole set of instrumentation, from a hummed tune; turning chords played on a simple MIDI keyboard into a full choir or other ensemble; and building backing and instrumental tracks for a vocal line you might already have. (Or, in fact, a combination of all three, starting with just a simple hum.)

In music, Google and Ghostwriter are, of course, not alone. Among others rolling out tools, Meta open-sourced an AI music generator in June; Stability AI launched one in September; and startups like Riffusion are raising money for their own efforts in the space. The music industry is scrambling to prepare, too.
