The company announced Tuesday that its generative AI music tool ProducerAI will become part of Google Labs.
The ProducerAI platform lets users generate music by writing natural language prompts (such as “Create a lo-fi beat”). It is built on Google DeepMind’s Lyria 3 music generation model and can convert text and image input into audio output.
Last week, Google announced that its Lyria 3 features would be coming to its flagship Gemini app, but ProducerAI will allow users to communicate with AI models more like “collaboration partners,” in the words of Elias Roman, senior director of product management at Google Labs.
“ProducerAI allows us to create in new ways,” Roman wrote in a blog post. “I’ve been experimenting with new genre blends, expressing my feelings with personalized birthday songs for loved ones, and creating custom workout soundtracks for myself and friends.”
Google also shared that three-time Grammy Award-winning rapper Wyclef Jean used the Lyria 3 model and Google’s Music AI Sandbox in his recent song “Back From Abu Dhabi.”
“This isn’t just a machine where you click a button 100 times and you’re done. You look at it carefully and say, ‘Oh, I think this works,'” Jeff Chan, director of product management at Google DeepMind, said in a video released by the company.
Jean wanted to hear what a flute would sound like on a track he had already recorded, and he recalls how quickly he was able to add the flute to the mix using Google’s tools.
“What I want everyone to understand is that (…) we are in a time where humans have to be the most creative,” Jean said in the video. “There’s one thing you have against the AI: a soul. And there’s one thing the AI has against you: infinite information.”
AI in the music industry
Some musicians are passionately opposed to the use of AI tools in music production, largely because generative AI tools are so often trained on copyrighted material without artists’ consent. Hundreds of musicians, including stars such as Billie Eilish, Katy Perry, and Jon Bon Jovi, signed an open letter in 2024 calling on technology companies not to undermine human creativity with AI music generation tools.
A group of music publishers also recently sued the AI company Anthropic for $3 billion, accusing it of illegally downloading more than 20,000 copyrighted musical works, including sheet music and lyrics. (Anthropic has already been ordered by a court to pay a $1.5 billion settlement to authors whose books were pirated for AI training.)
But other artists are embracing the technology’s potential not as a creative aid, but as a way to improve audio quality.
Paul McCartney used an AI-powered noise reduction system (the kind of technology that can suppress unwanted background noise during video calls on Zoom and FaceTime) to clean up a low-quality John Lennon demo recorded decades ago. The result, the Beatles’ “new” song “Now and Then,” won a Grammy Award in 2025.
Meanwhile, AI music generation tools like Suno have created synthetic music that sounds realistic enough to reach the top of the charts on Spotify and Billboard. Telisha Jones, a 31-year-old from Mississippi, used Suno to turn her (spontaneous) poetry into the viral R&B song “How Was I Supposed To Know,” which reportedly landed her a record deal worth $3 million with Hallwood Media.
The law remains unsettled on the use of copyrighted works as AI training data. Federal judge William Alsup ruled last year that training on copyrighted material can be legal, but that pirating those works is not.
