Artists vs AI: Who will win the copyright fight of the century?

  • The rapid advancement of AI, epitomised by OpenAI’s Sora model, has sparked global discourse on its implications, particularly concerning copyright, ethical, and legal issues in various sectors, including entertainment and media.
  • Instances such as Hollywood’s collective strike, lawsuits against OpenAI and Microsoft, and controversies like the Taylor Swift photo incident highlight growing concerns among creators regarding AI’s impact on their rights and originality.
  • Efforts to address these concerns range from legal actions to proposed legislation, emphasising the need for transparent AI training data and a balanced approach to technological innovation and creator protection.

The rapid development of generative AI, marked most recently by the emergence of OpenAI’s video model Sora, a milestone for AI-generated content (AIGC), has sparked widespread discussion globally. Amidst this wave of advancement, however, voices of protest have grown increasingly loud. From Hollywood’s collective strikes to The New York Times’ lawsuit against OpenAI and Microsoft, and the AI-fabricated Taylor Swift images, the use of AI technology has raised a series of legal, copyright, and ethical issues. Behind it all, artists face the dilemma of having their work ‘alchemised’ as AI training material, prompting profound reflection on the protection of creators’ rights.

Also read: OpenAI and Microsoft face lawsuits over AI copyright infringement

Also read: ChatGPT: Is it really infringing copyright?

AI copyright disputes continue to emerge

In July 2023, Hollywood creators launched a collective strike, demanding a renegotiation of labour relations and pushing back against the encroachment of AI. SAG-AFTRA president Fran Drescher stressed that if they did not stand up and resist now, they would risk being replaced by machines in the future. Then, in December, The New York Times sued OpenAI and Microsoft for using unauthorised content to train AI, sparking debate over copyright infringement. These events reflect creators’ growing concern, and their strengthened resolve to protect their rights, amid the AI wave.

Even more worrying, AI-fabricated intimate images of Taylor Swift went viral online, causing serious damage to her reputation. The incident angered fans worldwide and pushed the ethical risks of AI technology to the forefront. In China, artists face a further dilemma as their works are ‘alchemised’ by AI; this unrestrained use of the technology threatens creators’ originality and public aesthetics, and has become an urgent problem for the industry to solve.

Also watch: Video: Taylor Swift deepfakes spark AI ethics debate


Pop quiz

When was Sora revealed to the world?

A. February 15, 2024

B. February 25, 2024

C. January 15, 2024

D. None of the above

The correct answer is at the bottom of the article. 


Copyright is an ambiguous area in AI model development

AIGC does not create content out of thin air; its training relies on vast datasets, whose sources and usage may raise copyright and ethical issues. Unauthorised use of artists’ works to train AI models, for example, not only constitutes copyright infringement but may also cause irreversible harm to artists’ creative styles. Concerns and criticism over the potential infringement of generative AI are growing louder both domestically and internationally, particularly in the creative arts and the film and media industries.

In fact, copyright has always been an ambiguous area in the development of AI models, and it is also the sword of Damocles hanging over the heads of tech giants.

In 2022, Forbes magazine interviewed Midjourney founder David Holz, asking whether he had sought copyright permission from living artists or creators before training AI. Holz clearly answered that he had not.

In his view, current copyright technology and management cannot cope with AI’s automated scraping: images carry no embedded copyright metadata, and there is no copyright “registry” that would let anyone find a picture on the internet and automatically trace its owner. The risk the founder acknowledged a year ago has now come to a head.
Tracking copyright and obtaining permission is undeniably difficult and time-consuming, but shifting the risk of infringement entirely onto users is no real solution.

People are working on the legal framework

Of course, some people are working on it. Two US representatives, Anna Eshoo and Don Beyer, recently introduced a bill called the “AI Foundation Model Transparency Act of 2023.”

The bill mentions that the director of the National Institute of Standards and Technology, the Copyright Office, the Office of Science and Technology Policy, and other relevant stakeholders should participate in discussions on copyright issues.

The bill also details the information AI models should provide, including an overview of the training data and its sources and the process by which tech companies handle that data, and it requires companies to explain the limitations and risks of their own models.

In this 14-page document, the importance of training data transparency with respect to copyright is repeatedly emphasised. This seems a promising direction for resolving copyright disputes. In reality, however, US bills often take months or even years to move from proposal to enactment.

Tech giants try to avoid their responsibility for AI copyright

Copyright protection still has a long way to go, but copyright infringement is happening all the time.

By the end of 2023, almost all the tech giants had faced lawsuits alleging copyright infringement, and AI companies like OpenAI often invoke a combination of “fair use” and the “safe harbour” rules in their defence.

“Fair use” is a doctrine of US copyright law that permits the use of copyrighted material in certain scenarios, such as criticism, news reporting, education, and academic research. Determining “fair use” involves weighing multiple factors, including the nature of the copyrighted content, the effect on the market, and whether the use is commercial.

However, these tech giants are plainly profiting handsomely (seven of the ten billionaires on Forbes’ recently published 2023 list of top wealth gainers are tech billionaires), which weakens any claim to “fair use.”

The “safe harbour” rules are designed to let tech companies’ services operate normally: internet service providers are obliged to act once they know of infringement or infringing content, for example by deleting infringing user uploads, and as long as such measures are taken they are exempt from liability. Hence the framework is also known as the “notice-and-takedown” rule.

However, the “safe harbour” rules are increasingly being abused: originally intended as a shield for the healthy operation and growth of the internet, they are slowly turning into a convenient cover for copyright infringement.

We need a reasonable legal and ethical framework


Faced with the challenge of AI technology, balancing technological innovation against the protection of creators’ rights has become an urgent issue. Some artists and creators are defending their rights through legal means, such as suing over unauthorised AI training, or by developing technical tools that prevent their works from being learned by AI. While these practices may provide some protection, the fundamental solution lies in constructing a reasonable legal and ethical framework that ensures the healthy development of AI technology without infringing on creators’ legitimate rights.

Stefanie Magness, the media strategist & PR agent at Elevate U PR shared her view on this, stating, “To enhance the situation regarding AI copyright disputes, it’s crucial to stay current and informed about copyright law. Given that this is a rapidly evolving area of law, staying abreast of developments is essential. Consulting with an intellectual property attorney becomes paramount in navigating the nuances and complexities of AI-related copyright matters.”

Furthermore, public understanding and awareness of AI technology need to be strengthened. Education and outreach that raise public awareness of AI’s potential risks can guide healthy, rational use of the technology, encourage respect for and protection of creative work, and help foster a fair and just digital environment.

In the face of a new wave of AI advancements, we are confronted with unprecedented challenges and opportunities. In the future, how human creators engage and coexist with AI technology will be a topic that requires the participation and contemplation of all members of society. Through continuous exploration and effort, we hope to find a balance between technological innovation and humanistic care, collectively embracing a brighter future.


The correct answer is A.


Chloe Chen

Chloe Chen is a junior writer at BTW Media. She graduated from the London School of Economics and Political Science (LSE) and had various working experiences in the finance and fintech industry. Send tips to c.chen@btw.media.
