This clever, “poisonous” new software is helping artists defend their work from AI

Nightshade gives artists a stealthy tool to fight back against copyright infringement.

BY Lindsey Choo

Since ChatGPT debuted in late 2022, sparking a frenzy of artificial intelligence development, artists have faced a dilemma. To build an audience for their work, they need to share it online. But by sharing it online, they risk having it used by tech companies that train their AI models on the art without the artists’ consent and without compensation.

Several ongoing lawsuits against AI companies allege exactly this: that the companies are training models on artists’ copyrighted art. A class action suit is currently pending against Midjourney, Stability AI, DeviantArt, and Runway AI. ChatGPT-maker OpenAI even admitted in January, in a submission to the U.K. parliament, that “it would be impossible to train today’s leading AI models without using copyrighted materials.”

Nightshade aims to reset that balance of power.

Nightshade, a tool released in October 2023, helps artists fight back when their work is scraped from the web. It acts as a “poisoning” agent that pollutes AI models by tricking them into training on corrupted data.

How Glaze led to Nightshade

The idea for the tool, developed by a team at the University of Chicago led by AI systems researcher Shawn Shan, grew out of a similar one he’d created called Glaze, which artists can install and apply to their art pieces. Glaze prevents text-to-image AI models from mimicking an artist’s style by adding a subtle layer over a piece that dramatically changes how the style registers to a machine. A realistic piece, when “glazed,” may appear to an AI model as an abstract, Pollock-style one. Glaze launched in 2023 and currently has more than two million downloads. But many artists don’t have a consistent signature style that’s easy to protect this way, and AI companies, Shan says, aren’t always fazed by such a passive tool.
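For the technically inclined: the team’s research paper frames Glaze as an optimization problem, nudging an image’s machine-visible style features toward a decoy style while capping how much any pixel may change. The sketch below is a loose conceptual illustration of that idea, not the released tool; the VGG stand-in feature extractor, the Gram-matrix style loss, and the pixel budget are all assumptions made for this example.

```python
# Conceptual sketch of a Glaze-style "style cloak." The VGG stand-in extractor,
# Gram-matrix style loss, and pixel budget are illustrative assumptions, not
# the released tool's actual implementation.
import torch
import torchvision.models as models

# Early VGG-16 conv blocks as a stand-in for a model's style-sensitive features.
vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:16].eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def gram(feats: torch.Tensor) -> torch.Tensor:
    # Gram matrix of feature maps: a standard proxy for artistic "style."
    b, c, h, w = feats.shape
    f = feats.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def cloak(art: torch.Tensor, decoy: torch.Tensor,
          eps: float = 0.03, steps: int = 100, lr: float = 0.01) -> torch.Tensor:
    """Nudge `art` ([1,3,H,W], values in [0,1]) so its style features resemble
    those of `decoy` (say, an abstract Pollock-like image), while every pixel
    stays within `eps` of the original so human viewers see little change."""
    target = gram(vgg(decoy))
    delta = torch.zeros_like(art, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        styled = gram(vgg((art + delta).clamp(0, 1)))
        loss = torch.nn.functional.mse_loss(styled, target)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # keep the cloak visually subtle
    return (art + delta.detach()).clamp(0, 1)
```

The small pixel budget is what makes this a cloak rather than a filter: the art looks unchanged to people, but its style reads very differently to a model.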

Nightshade, which accrued more than 300,000 downloads within the first few months of its release, takes a more active approach. Instead of merely defending artists, it actively deters AI companies from training on copyrighted art. It does this by weaponizing the data in the images that AI crawlers encounter, embedding changes that make the data harmful to train on.

When an artist applies Nightshade to an image of a cow in a green field, for instance, an AI model might see a different image, like a large leather purse lying in the field. If the model is trained on enough “shaded” images of cows, it will eventually be convinced that cows have purse-like qualities. So if an end user prompts an AI image generator for a picture of a cow, it might produce an object with leather handles and a zipper, rendering the model’s output inaccurate. (The decoy that the AI model sees is randomized by the software’s algorithm each time an image is shaded.)
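Per the team’s research paper, the mechanics are an optimization much like Glaze’s, except the perturbation targets what the image depicts rather than how it is drawn. Here is a minimal sketch of the idea, assuming a generic pretrained encoder stands in for the target model’s feature extractor; the encoder choice, loss, and pixel budget are illustrative, not the released tool’s internals.

```python
# Conceptual sketch of Nightshade-style poisoning: make a cow photo carry the
# features of a purse. The ResNet stand-in encoder, loss, and pixel budget are
# assumptions for illustration, not the released tool's internals.
import torch
import torchvision.models as models

# A generic pretrained image encoder stands in for the feature extractor of
# the text-to-image model being poisoned.
encoder = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
encoder.fc = torch.nn.Identity()  # use penultimate features as the embedding
encoder.eval()
for p in encoder.parameters():
    p.requires_grad_(False)

def shade(cow: torch.Tensor, purse: torch.Tensor,
          eps: float = 0.05, steps: int = 200, lr: float = 0.01) -> torch.Tensor:
    """Perturb `cow` ([1,3,H,W], values in [0,1]) so that, to the encoder, it
    looks like `purse`, while each pixel stays within `eps` of the original.
    A model trained on many such images starts linking "cow" with purses."""
    target = encoder(purse)
    delta = torch.zeros_like(cow, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        poisoned = (cow + delta).clamp(0, 1)
        loss = torch.nn.functional.mse_loss(encoder(poisoned), target)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # invisible to humans, misleading to models
    return (cow + delta.detach()).clamp(0, 1)
```

The budget `eps` is the key trade-off: small enough that viewers see a normal cow, large enough that a model training on the image absorbs purse features instead.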

“If you try my data, not only are you not able to learn anything, but your model performance will suffer from a base model perspective,” Shan says. 

Getting artists involved

Glaze and Nightshade both work well enough that they don’t change the way the art looks to a general viewer, says Reid Southern, a concept artist and illustrator for film who works with Shan’s team to test the tools. (Glaze used to have some tells, particularly in more detailed artwork, but a new version, introduced in April, has improved, Southern says.)

Jon Lam, a Los Angeles-based storyboard artist, says he uses both tools on every piece of art he posts publicly on social media and elsewhere online, and that applying them takes only a few minutes before posting. WebGlaze, a web version of the tool, lets artists apply the protective shield from a phone when they’re away from a computer.

Shan and his research team have refused funding from venture capital and private firms, despite being approached by a few, and instead run their outfit, The Glaze Project, as a nonprofit fueled exclusively by donations. “Most of these parties are profit-driven and pro-AI,” Shan says. “Involving these VCs will make it a lot more complicated.”

Fighting the larger battle

Following the arrival in February of OpenAI’s text-to-video AI model Sora, plus the ambiguous answers OpenAI has given about what its training data includes, Shan and his colleagues are also looking to extend Nightshade’s and Glaze’s protections to video and animation.

Still, while Nightshade and Glaze offer artists some control and protection now, they’re not meant to be a long-term solution to the problem of AI models scraping artists’ work without consent, Shan says. His team is actively engaged with the U.S. Copyright Office, the FTC, and other agencies in both the U.S. and the European Union about establishing broader protections for artists’ rights.

“Glaze and Nightshade give us a little bit of power back,” Southern says. “We feel like we can do something while we wait on other people to do the right thing.”

A research paper on the Nightshade tool, outlining its effectiveness as a copyright-protection measure, is set to be presented at the IEEE Symposium on Security and Privacy later this month.

ABOUT THE AUTHOR

Lindsey Choo is an independent multimedia journalist with reporting experience in breaking news, technology, internet culture, and investigations. She has written for The Wall Street Journal, Politico’s tech news site Protocol, and the Center for Healthy Aging, and contributes to Platformer News, a newsletter focused on the intersection of Silicon Valley and democracy.


Fast Company
