• Adobe’s latest generative AI experiment aims to help people create and customize music without any professional audio experience.
  • Project Music GenAI Control can adjust the generated audio “based on a reference melody” and extend the length of audio clips, useful for making a track long enough for something like a fixed-length animation or a podcast segment.

Adobe has unveiled a new project, Project Music GenAI Control, that aims to change how people create and customize music, even without professional audio experience. Announced at the Hot Pod Summit in Brooklyn, the prototype system lets users generate and edit music with simple text prompts, all within a single workflow.

Revolutionizing music creation with Project Music GenAI Control

Imagine starting with a text description like “happy dance” or “sad jazz” and watching as music is generated in the specified style. Adobe’s integrated editing controls then empower users to personalize the results further by adjusting elements like patterns, tempo, intensity, and structure. Sections of the music can be remixed, and the audio can be transformed into a repeating loop, perfect for creating backing tracks or background music for various content needs.

What sets Project Music GenAI Control apart is its ability to adjust the generated audio based on a reference melody and to extend the length of audio clips as needed. Adobe has not yet disclosed details about the user interface for editing the generated audio.


The influence on the music industry

In a move towards democratizing music creation, Adobe has made a demo of Project Music GenAI Control built on public-domain content available. However, questions remain about whether the tool can accept direct audio uploads as reference material and how far clips can be extended. Even with those details outstanding, the tool's potential impact on the music industry is clear.

Nicholas Bryan, a senior research scientist at Adobe Research, highlighted the transformative nature of these AI-powered tools in reshaping music creation. He likened the level of control provided by Project Music GenAI Control to the precision editing capabilities found in Photoshop, empowering creatives to shape and refine their audio compositions at a granular level.


The future of AI in music production

Developed in collaboration with the University of California and Carnegie Mellon University’s School of Computer Science, Project Music GenAI Control is positioned as an early-stage experiment with broad implications for the future of audio editing. The tool is not yet available to the public, but its progress can be tracked on the Adobe Labs website, offering a glimpse into the company’s ongoing research.

With similar AI-driven tools such as Google’s MusicLM and Meta’s AudioCraft already in the field, the push to let users generate and edit music seamlessly is becoming increasingly pronounced. As Adobe joins this race, the convergence of AI and music production is poised to redefine creative possibilities for musicians and content creators alike.