The tool, called Nightshade, messes up training data in ways that could cause serious damage to image-generating AI models. It is intended as a way to fight back against AI companies that use artists’ work to train their models without the creator’s permission.
ARTICLE - Technology Review
ARTICLE - Mashable
ARTICLE - Gizmodo
The researchers tested the attack on Stable Diffusion’s latest models and on an AI model they trained themselves from scratch. When they fed Stable Diffusion just 50 poisoned images of dogs and then prompted it to create images of dogs, the output started looking weird: creatures with too many limbs and cartoonish faces. With 300 poisoned samples, an attacker can manipulate Stable Diffusion so that prompts for dogs generate images that look like cats.
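For anyone wondering what a “poisoned image” means mechanically: the published description is that each poisoned sample keeps its original caption (e.g. “dog”) while its pixels are subtly perturbed so that the model’s feature extractor sees something closer to an unrelated anchor concept (e.g. “cat”). Below is a minimal, illustrative PyTorch sketch of that idea under those assumptions; the StandInEncoder, the eps budget, and the poison() helper are hypothetical stand-ins, not Nightshade’s actual code.

```python
# Toy sketch of a feature-space poisoning step (assumed mechanism, not Nightshade's code).
# A tiny stand-in CNN plays the role of the target model's image encoder.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StandInEncoder(nn.Module):
    """Hypothetical placeholder for the generator's image encoder."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
    def forward(self, x):
        return self.net(x)

def poison(dog_img, cat_img, encoder, eps=8 / 255, steps=200, lr=0.01):
    """Perturb dog_img (still captioned "dog") so its features match cat_img.

    eps bounds the per-pixel change, keeping the poison visually subtle.
    """
    delta = torch.zeros_like(dog_img, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    with torch.no_grad():
        target_feat = encoder(cat_img)          # features of the anchor concept
    for _ in range(steps):
        poisoned = (dog_img + delta).clamp(0, 1)
        loss = F.mse_loss(encoder(poisoned), target_feat)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)             # keep the perturbation small
    return (dog_img + delta).detach().clamp(0, 1)

encoder = StandInEncoder()
dog = torch.rand(1, 3, 64, 64)   # random stand-in images; a real attack uses photos
cat = torch.rand(1, 3, 64, 64)
poisoned_dog = poison(dog, cat, encoder)
# poisoned_dog keeps its "dog" caption; enough such samples in a training set
# nudge the model's learned notion of "dog" toward the "cat" anchor.
```

The point of the eps budget in this sketch is that the poisoned image still looks like a normal dog photo to a human curating the dataset, which is why the numbers above (50 and 300 samples) matter: the attack only works if poisoned images slip into training unnoticed.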
This fight is over.
If this is all artists brought to the table, it wasn’t even a fight. SD is trained on vast datasets; this little effort won’t amount to more than a drop in the ocean.
More than that - there is no need for new inputs. Massive datasets exist independently. I’ve got one just from a long-term habit of saving images. And my big fat pile of JPGs doesn’t matter, because these models are already out there, in the wild, with communities built on screwing around with them.
The horse left the barn a year ago. It is already too late to stop this. We can bicker about moral and legal rights surrounding published content, but any suggestion of un-inventing this technology is a misguided fantasy.
There is no “if.” This fight is over.