This New Data Poisoning Tool Lets Artists Fight Back Against Generative AI
by Melissa Heikkilä
This page contains highlights I saved while reading This New Data Poisoning Tool Lets Artists Fight Back Against Generative AI by Melissa Heikkilä. These quotes were collected using Readwise.
Highlights
Artists who want to upload their work online but don't want their images to be scraped by AI companies can upload them to Glaze and choose to mask them with an art style different from theirs. They can then also opt to use Nightshade. When AI developers scrape the internet to get more data to tweak an existing AI model or build a new one, these poisoned samples make their way into the model's data set and cause it to malfunction.
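The key idea is that the protected image still looks normal to a person but carries small pixel-level changes that mislead a model trained on it. The sketch below is only a conceptual illustration of that bounded-perturbation idea, not the actual Glaze or Nightshade algorithm (which optimize the perturbation against a model's feature extractor); the file names and epsilon value are placeholders.

```python
# Conceptual sketch of an imperceptible image perturbation (NOT the real
# Glaze/Nightshade method). It adds a small, bounded random change to each
# pixel so the image looks unchanged to a viewer while the raw data a
# scraper collects is altered.
import numpy as np
from PIL import Image

def perturb_image(path_in: str, path_out: str, epsilon: int = 4) -> None:
    """Add a random perturbation capped at +/- epsilon per color channel."""
    img = np.asarray(Image.open(path_in).convert("RGB"), dtype=np.int16)
    noise = np.random.randint(-epsilon, epsilon + 1, size=img.shape, dtype=np.int16)
    poisoned = np.clip(img + noise, 0, 255).astype(np.uint8)
    Image.fromarray(poisoned).save(path_out)

# Example usage (hypothetical file names):
# perturb_image("artwork.png", "artwork_protected.png", epsilon=4)
```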
"I'm just really grateful that we have a tool that can help return the power back to the artists for their own work," she says.
Gautam Kamath, an assistant professor at the University of Waterloo who researches data privacy and robustness in AI models and wasn't involved in the study, says the work is "fantastic."
Zhao admits there is a risk that people might abuse the data poisoning technique for malicious uses. However, he says attackers would need thousands of poisoned samples to inflict real damage on larger, more powerful models, as they are trained on billions of data samples.
Ben Zhao, a professor at the University of Chicago, who led the team that created Nightshade, says the hope is that it will help tip the power balance back from AI companies towards artists, by creating a powerful deterrent against disrespecting artists' copyright and intellectual property.
Autumn Beverly, another artist, says tools like Nightshade and Glaze have given her the confidence to post her work online again. She previously removed it from the internet after discovering it had been scraped without her consent into the popular LAION image database.
"We don't yet know of robust defenses against these attacks. We haven't yet seen poisoning attacks on modern [machine learning] models in the wild, but it could be just a matter of time," says Vitaly Shmatikov, a professor at Cornell University who studies AI model security and was not involved in the research. "The time to work on defenses is now," Shmatikov adds.