Nightshade AI tool ‘poisons’ digital artwork to help artists fight intellectual property infringement

Artists who don’t want their work to be used to train AI models now have a new weapon in their arsenal – an ‘image-poisoning’ tool called Nightshade. According to a report from MIT’s Technology Review, Nightshade changes the pixels of a digital image in order to trick an AI system into misinterpreting it.

Nightshade was developed by researchers at the University of Chicago under the guidance of computer science professor Ben Zhao, and will be added as an optional setting in the team’s earlier product Glaze, an online tool that masks digital artwork by altering its pixels to confuse AI models’ perception of its style.


Artists and musicians file lawsuits against AI

Visual artists are not the only ones fighting to keep their intellectual property from being used to train AI.

Major music publishers filed a blockbuster lawsuit this week accusing artificial intelligence company Anthropic of engaging in the “unauthorized acquisition and use of large volumes of copyrighted content” to train its popular Large Language Model (LLM) chatbot Claude.

Many artists and music producers have long been unhappy with AI because their work has been used as training material without permission and then used to produce competing products.

Artificial intelligence relies on big data: it is built on algorithms that learn patterns and rules from existing material rather than inventing from scratch. In that sense, generative AI is a form of “database creation.”

AI models trained on human literature depend on large numbers of literary works: the more numerous and richer the patterns in the data, the more easily the model can learn, imitate, and develop its skills.

That is, these AI models need to access and learn from a vast amount of multimedia content, including textual material and images created by artists who had neither prior knowledge of, nor any opportunity to object to, their work being used to train new commercial AI products.

Many of the training datasets for these AI models include material scraped from the web, a practice artists once largely supported when it was used to index their material for search results. Now, many have turned against it because it allows AI to create competing works.

Nightshade rebalances the artist-AI playing field

In the case of Nightshade, the artists’ counterattack against AI goes a step further: it causes AI models to learn the wrong names for objects and scenes. For example, by injecting information into the pixels, the researchers made an image of a dog register as a cat to the model.
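Nightshade’s actual algorithm is more sophisticated and targets the training process of image-generation models, but the general idea behind this kind of data poisoning can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration: it optimizes a small, bounded pixel perturbation so that a “dog” image’s features, as seen by a stand-in feature extractor, move toward those of a “cat” image. All names here (the random CNN, the random tensors standing in for photos) are placeholders for illustration, not Nightshade’s real components.

```python
import torch
import torch.nn as nn

# Stand-in feature extractor. A real poisoning tool would target the
# feature space of an actual model (e.g. an image encoder used in
# training generators); a small random CNN keeps this sketch
# self-contained and runnable.
torch.manual_seed(0)
extractor = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
).eval()

# Hypothetical stand-ins for real photos: a "dog" image the artist wants
# to protect, and a "cat" anchor image from the target concept.
dog = torch.rand(1, 3, 64, 64)
cat = torch.rand(1, 3, 64, 64)
with torch.no_grad():
    target_features = extractor(cat)

# Learn a perturbation `delta`, kept visually subtle by clamping its
# per-pixel magnitude, that pulls the dog image's features toward the
# cat's. A model trained on many such poisoned images could start
# associating dog-like pixels with the "cat" concept.
delta = torch.zeros_like(dog, requires_grad=True)
opt = torch.optim.Adam([delta], lr=0.01)
eps = 8 / 255  # maximum per-pixel change, hard to notice by eye

for step in range(200):
    poisoned = (dog + delta).clamp(0, 1)
    loss = nn.functional.mse_loss(extractor(poisoned), target_features)
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():
        delta.clamp_(-eps, eps)  # keep the change imperceptible

poisoned_image = (dog + delta).clamp(0, 1).detach()
print(f"feature distance to 'cat' after poisoning: {loss.item():.4f}")
```

The key design point, which carries over to real tools like Nightshade and Glaze, is the constraint on `eps`: the perturbation must stay small enough that a human sees an unchanged artwork, while the model’s learned representation of it shifts substantially.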

The researchers acknowledge their work could be used for malicious purposes. According to the MIT Tech Review article on their work, their “hope is that it will help tip the power balance back from AI companies towards artists, by creating a powerful deterrent against disrespecting artists’ copyright and intellectual property.”

Since the debut of ChatGPT, generative artificial intelligence (AI) has entered an era of rapid development, but opposition is growing, especially over the perceived unfairness of how training data is gathered.

Ivy Wu

Ivy Wu was a media reporter at btw media. She graduated from Korea University with a major in media and communication, and has extensive experience in reporting and news writing.
