Armitage Archive

This New Data Poisoning Tool Lets Artists Fight Back Against Generative AI

by Melissa Heikkilä

Original article

This page contains highlights I saved while reading This New Data Poisoning Tool Lets Artists Fight Back Against Generative AI by Melissa Heikkilä. These quotes were collected using Readwise.

Highlights

Artists who want to upload their work online but don't want their images to be scraped by AI companies can upload them to Glaze and choose to mask them with an art style different from theirs. They can then also opt to use Nightshade. Once AI developers scrape the internet to get more data to tweak an existing AI model or build a new one, these poisoned samples make their way into the model's data set and cause it to malfunction.
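
The article describes this workflow only at a high level (mask the image with Glaze, optionally add Nightshade, and let scraped copies carry the perturbation into training sets). As a loose illustration of what a "poisoned" upload means at the pixel level, here is a minimal sketch that writes a slightly perturbed copy of an image before posting. It uses plain bounded random noise as a stand-in, not the optimized, style-targeted perturbations Glaze and Nightshade actually compute; the function name, file names, and epsilon value are all hypothetical.

```python
# Conceptual sketch only: NOT the Glaze or Nightshade algorithm (the article
# does not describe their internals). Real tools optimize the perturbation so
# that models trained on the image learn misleading style or concept features;
# here we just add small, bounded random noise to show that the published
# pixels differ slightly from the original while staying visually similar.

import numpy as np
from PIL import Image


def cloak_image(src_path: str, dst_path: str, epsilon: int = 4) -> None:
    """Write a copy of the image whose pixels differ by at most `epsilon` levels."""
    img = np.asarray(Image.open(src_path).convert("RGB"), dtype=np.int16)

    # Placeholder perturbation: uniform noise in [-epsilon, +epsilon].
    noise = np.random.randint(-epsilon, epsilon + 1, size=img.shape, dtype=np.int16)

    cloaked = np.clip(img + noise, 0, 255).astype(np.uint8)
    Image.fromarray(cloaked).save(dst_path)


if __name__ == "__main__":
    # Hypothetical file names: the perturbed copy is what would be posted online.
    cloak_image("artwork.png", "artwork_cloaked.png")
```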

"I'm just really grateful that we have a tool that can help return the power back to the artists for their own work," she says.

Gautam Kamath, an assistant professor at the University of Waterloo who researches data privacy and robustness in AI models and wasn't involved in the study, says the work is "fantastic."

Zhao admits there is a risk that people might abuse the data poisoning technique for malicious uses. However, he says attackers would need thousands of poisoned samples to inflict real damage on larger, more powerful models, as they are trained on billions of data samples.

Ben Zhao, a professor at the University of Chicago, who led the team that created Nightshade, says the hope is that it will help tip the power balance back from AI companies towards artists, by creating a powerful deterrent against disrespecting artists' copyright and intellectual property.

Autumn Beverly, another artist, says tools like Nightshade and Glaze have given her the confidence to post her work online again. She previously removed it from the internet after discovering it had been scraped without her consent into the popular LAION image database.

"We don't yet know of robust defenses against these attacks. We haven't yet seen poisoning attacks on modern [machine learning] models in the wild, but it could be just a matter of time," says Vitaly Shmatikov, a professor at Cornell University who studies AI model security and was not involved in the research. "The time to work on defenses is now," Shmatikov adds.

Want more like this? See all articles or get a random quote.