Why I loathe AI-generated music: When melody loses its humanity

  • The advancement of AI-generated music, while technologically impressive, raises significant ethical and legal concerns over intellectual property, authenticity, and the potential impact on human artists’ livelihoods and creative expression.
  • Regulatory challenges, including attribution and cross-jurisdictional issues, complicate the landscape, underscoring the need for clearer guidelines and international cooperation to navigate the complexities of this evolving field.

OUR TAKE
While I recognise the potential benefits of AI-generated music, I cannot help but feel a profound sense of discomfort at the thought of machines replacing the human touch in one of the most emotive forms of art. Music is more than just a product; it’s a reflection of our humanity, and I hope that as technology advances, we remember to preserve the essence of what makes music truly special.
–Vicky Wu, BTW reporter

From ancient bone flutes to today’s AI-generated tunes, music has undergone a remarkable transformation. New AI music generators now allow virtually anyone to create symphonies from simple text prompts. Yet, there’s a concern that these technological marvels might erode the very heart of music—the human touch and emotional depth that stir our souls.

Imagine a scenario where AI-produced melodies replace the rich tapestry of human emotion embedded in every note and chord. The subtle imperfections that give music its soul—such as a singer’s wavering voice or the unique twang of a guitar string—are what truly engage our senses and emotions. If AI takes over, we might find ourselves surrounded by perfectly crafted tunes that lack the depth and nuance of a human narrative.

The evolution of AI in music: From whimsy to reality

Early experiments and the rise of AI-generated music

Let me start by saying that I’m not a Luddite; I love technology and its potential to change our lives for the better. However, when it comes to AI-generated music, I find myself feeling a peculiar sense of unease. Generative AI, when applied to music, uses artificial intelligence algorithms to create original musical compositions, arrangements, and performances. These systems typically rely on machine learning models trained on extensive datasets of existing musical works to learn the patterns, structures, and stylistic elements of music. Once trained, generative AI can produce new pieces of music that reflect the style of the training data, but with unique variations and original content. To date, generative AI has intersected with music in various intriguing ways, aiding composers, enhancing film and video game soundtracks, and powering interactive live performances. While all this sounds impressive, let me explain why I think it’s a step too far.
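To make that “learn the patterns, then generate something new” idea concrete, here is a deliberately toy sketch in Python. It is not how commercial systems such as Udio, Suno or Jukebox actually work (those rely on large neural networks trained on audio), but a simple first-order Markov chain over a tiny, invented note corpus shows the basic mechanism: count which note tends to follow which in the training data, then sample a new melody that echoes those tendencies. All note sequences and names below are made up for illustration.

```python
import random

# Toy illustration only: a first-order Markov chain "learns" which note
# tends to follow which in a tiny, invented training corpus, then samples
# a new melody that mirrors those note-to-note patterns.
training_melodies = [
    ["C", "E", "G", "E", "C", "D", "E", "C"],   # hypothetical example data
    ["C", "D", "E", "G", "E", "D", "C", "C"],
]

# Build transition counts: note -> list of notes observed to follow it.
transitions = {}
for melody in training_melodies:
    for current, nxt in zip(melody, melody[1:]):
        transitions.setdefault(current, []).append(nxt)

def generate(start="C", length=8):
    """Sample a new melody whose note-to-note movement echoes the corpus."""
    melody = [start]
    for _ in range(length - 1):
        options = transitions.get(melody[-1])
        if not options:               # dead end: fall back to any known note
            options = list(transitions)
        melody.append(random.choice(options))
    return melody

print(generate())  # e.g. ['C', 'E', 'G', 'E', 'D', 'C', 'D', 'E']
```

Real generators replace the transition table with deep networks and far larger datasets, but the underlying trade-off the article discusses is the same: the output only ever reflects what was ingested as training data.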

Early forays into AI-generated music were often met with a mix of amusement and intrigue. The AI program Flow Machines produced “Daddy’s Car,” a catchy tune with a melody reminiscent of The Beatles. Meanwhile, OpenAI’s “Jukebox Samples” created brief musical snippets that imitated the styles of iconic artists such as Céline Dion and Frank Sinatra. These early attempts, while charming, felt somewhat distant and generalised, akin to capturing the essence of an artist’s work through a glass darkly. Titles like “Country, in the style of Alan Jackson” left little doubt about their inspiration, hinting at the looming copyright challenges. Now, don’t get me wrong: I found these early experiments amusing, but something was missing – the soul, the passion, the human touch that makes music truly resonate.


Pop quiz

What was one of the early AI-generated songs that reflected the style of a famous artist?

A. “Daddy’s Car” in the style of The Beatles

B. “Country” in the style of Dolly Parton

C. “Smooth Operator” in the style of Sade

D. “My Way” in the style of Frank Sinatra

The correct answer is at the bottom of the article.


Sophistication and realism

However, the landscape of AI-generated music has evolved significantly. The advent of deepfake technology has ushered in a new era of digital impersonation, one that blurs the lines between the real and the fabricated. A YouTube audio clip of a voice eerily resembling Jay-Z reciting Shakespeare with his signature smooth, commanding delivery, and a track titled “Heart on My Sleeve” that sounded like a collaboration between Drake and The Weeknd, though neither artist was involved, proved convincing enough that record labels demanded the content be removed. In a surprising twist, Drake himself embraced the technology, using deepfaked voices of 2Pac and Snoop Dogg in a diss track aimed at Kendrick Lamar. While these developments showcase the sophistication of AI, they also highlight my concerns about authenticity and the ethics of impersonation.

Ethical questions and the future

These developments underscore the growing sophistication of AI in music and the ethical questions it raises. While initial experiments were largely seen as benign and entertaining, the increasing realism of these creations raises concerns about authenticity, ownership, and the impact on human artists. One wonders what the future holds for music creation and distribution, and how these technologies will continue to shape the industry. Will we see a day when AI-generated music becomes indistinguishable from human-created works? And how will legal and ethical frameworks adapt to this rapidly evolving landscape? I worry that we’re heading down a path where the very essence of music – the human element – is lost.

Also read: AI lawsuit from music labels sparks battle over creativity rights

The impact of AI on the music industry

Intellectual property and ownership

A major concern is the protection of intellectual property. With AI systems capable of creating music that closely resembles the work of human artists, there are pressing questions about ownership and attribution. If an AI system creates a song that sounds remarkably similar to a particular artist’s style, who owns the rights to that song? How should the original artist’s contributions be acknowledged, if at all? Country musician Tift Merritt, whose hit “Traveling Alone” was imitated by AI music platform Udio to create “Holy Grounds”, dismissed the AI-generated track as lacking transformative power and labelled it “theft”. Merritt, alongside prominent artists like Billie Eilish, Nicki Minaj, and Stevie Wonder, signed an open letter warning that AI-generated music could undermine creativity and sideline human artists. Major record labels, including Sony Music, Universal Music Group, and Warner Music, have expressed concern and initiated legal action against Udio and another AI music company, Suno. These lawsuits mark the beginning of significant copyright battles over AI-generated content within the music industry. I believe that the legal system needs to catch up with the technology to ensure that artists are protected and credited appropriately.

“Generative AI models generally compete with their training data. There’s frankly a limited amount of time that people spend listening to music. There’s a limited royalty pool. And so the more of the music that is made with these systems, the less is going to human musicians.”

Ed Newton-Rex, VP of audio at Stability AI

Impact on musicians and authenticity

There are also fears that the widespread adoption of AI-generated music could harm the livelihoods of human musicians. As AI becomes more capable of producing high-quality music, it might replace some jobs traditionally held by human composers, performers, and producers. Critics argue that AI-generated music lacks the genuine emotional connection that comes from human expression and creativity. There is also concern that AI-generated music could flood the market, saturating it in a way that dilutes the quality and diversity of music available and makes it harder for emerging artists to gain recognition in an already crowded field. Personally, I believe that music is about more than just notes and rhythms; it’s about the stories behind them, the emotions they evoke, and the experiences they capture.

Ethical considerations and artist consent

Beyond intellectual property, there are ethical concerns about using AI to mimic or clone the voices of living or deceased artists, raising questions about consent and the integrity of an artist’s legacy. A digital recreation of Ariana Grande’s distinctive voice, achieved through deep learning models trained on extensive libraries of her vocal performances and interviews, has caused a stir online. By analysing countless hours of audio recordings, these models have been able to replicate Grande’s unique timbre and intonation with remarkable accuracy. This has sparked both fascination and concern among fans and industry insiders, highlighting the ethical dilemmas surrounding the use of AI in recreating artists’ voices without their explicit permission or that of their estates. I find it unsettling that someone’s voice could essentially be cloned without their consent, and I worry about the implications for personal identity and artistic integrity.

Also read: AI music startup Suno admits using copyrighted songs in AI training

Navigating the regulatory challenges

Regulatory frameworks and complexities

Regulating AI-generated music is challenging because it intertwines legal, technical, and ethical complexities. The technology evolves at a pace that often outstrips existing regulatory frameworks, and as AI capabilities improve, the boundaries of what is possible keep expanding, making it difficult for regulations to keep up. For example, the ability of AI systems to create music that closely mimics the style of specific artists raises complex questions about ownership and attribution. When an AI system creates a piece of music, determining who owns the rights—whether it is the creator of the AI, the person who operates it, or the entity that provides the data used to train the AI—can be a legal minefield. I think we need clearer guidelines and regulations to prevent exploitation and ensure that the rights of artists are respected.

Attribution and cross-jurisdictional issues

Furthermore, attribution is a significant issue, particularly when AI-generated music closely mirrors the style of an artist. Determining the extent to which the AI is simply emulating an artist’s style versus actually copying their work is complex and often subjective. Cross-jurisdictional issues add to the complexity, as the global nature of the internet means that AI-generated music can be distributed across multiple jurisdictions, each with its own set of laws and regulations. Harmonising these different legal frameworks is a significant challenge, especially given the lack of legal precedent in this relatively novel area. I believe that international cooperation is essential to address these issues effectively.

Ethical considerations and technological ambiguity

Ethical considerations also play a crucial role, particularly when AI is used to mimic or clone the voices of living or deceased artists without their consent. Balancing the interests of artists, consumers, and innovators while respecting privacy and legacy is a complex task. Technological ambiguity further complicates efforts to regulate AI effectively, as the inner workings of AI systems can be opaque, making it difficult to understand exactly how a piece of music was generated. I think transparency and accountability are key to addressing these ethical concerns.

“The real dividing line between useful and disastrous is very simple. It’s whether the producers of the music or whatever else is being injected [as training data] have a real, functional right of consent. [AI music generators] regurgitate what they ingest, and oftentimes they produce things with large chunks of copyrighted material. That’s the output. But even if they haven’t, even if the output isn’t the violation, the ingestion itself is a violation.”

Marc Ribot, member of the Music Workers Alliance’s steering committee on AI

The ELVIS Act and the path forward

In March 2024, Tennessee became the first US state to enact legislation aimed at curbing the misuse of AI in the music industry. The ELVIS Act (Ensuring Likeness, Voice and Image Security Act), its name a nod to earlier legal battles over the unauthorised use of Elvis Presley’s likeness, highlights the growing concern over the ethical implications of AI-generated music and the urgent need for regulatory frameworks to safeguard artists’ rights and the integrity of their work. During the legislative process, co-sponsors of the bill and representatives from the state’s music community, including country and contemporary Christian artists, passionately advocated for protecting artists from having their voices cloned and their words misattributed. While I appreciate the intent behind such legislation, I hope it doesn’t stifle innovation entirely.

Collaborative efforts and comprehensive frameworks

Addressing these challenges requires a collaborative effort between technologists, legal experts, artists, and policymakers to develop comprehensive and adaptable regulatory frameworks that balance innovation with the protection of intellectual property, artistic integrity, and consumer rights. As the technology continues to evolve, striking this balance will be crucial to ensuring a vibrant and equitable music industry. I believe that a balanced approach that fosters innovation while protecting the rights of artists and the integrity of music is the way forward.


The correct answer is A. “Daddy’s Car” in the style of The Beatles

Vicky Wu

Vicky is an intern reporter at Blue Tech Wave specialising in AI and Blockchain. She graduated from Dalian University of Foreign Languages. Send tips to v.wu@btw.media.
