This post was submitted on 12 Nov 2023
40 points (71.7% liked)

Technology

34912 readers

This is the official technology community of Lemmy.ml for all news related to the creation and use of technology, and to facilitate civil, meaningful discussion around it.


Ask in a DM before posting product reviews or ads. Otherwise, all such posts are subject to removal.


Rules:

1: All Lemmy rules apply

2: No low-effort posts

3: NEVER post naziped*gore stuff

4: Always post article URLs or their archived versions as sources, NOT screenshots. This helps blind users.

5: Personal rants about Big Tech CEOs like Elon Musk are unwelcome (this does not include posts about their companies affecting a wide range of people)

6: No advertisement posts unless verified as legitimate and non-exploitative/non-consumerist

7: Crypto-related posts, unless essential, are disallowed

founded 5 years ago
top 13 comments
[–] Even_Adder@lemmy.dbzer0.com 47 points 1 year ago (1 children)

Reminder that this is made by Ben Zhao, the University of Chicago professor who illegally stole open source code for his last data poisoning scheme.

[–] DrRatso@lemmy.ml 6 points 1 year ago

Huh, that's the same author as Glaze? If so, I have heavy doubts about this one. Glaze made everything look like shit, and even then it didn't really work, not to mention anyone who would actually use an artwork for training could remove Glaze easily using A1111.

[–] stoy@lemmy.zip 32 points 1 year ago (1 children)

This is the same article I have read before, and it covers technology that doesn't work in reality.

The inventors need to read up on a new fringe technology called "anti-aliasing", which quickly and easily removes the protection.
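For illustration only, here is a minimal sketch (in Python, using Pillow) of the kind of resampling step this comment is alluding to: downscale and re-upscale with a smoothing filter, which low-pass filters fine-grained pixel perturbations. The file names are placeholders, and whether a step like this actually defeats the protection is exactly what is disputed in the reply below.

```python
# Minimal sketch of the resampling / "anti-aliasing" idea described above.
# File names are placeholders; this is an illustration, not a tested removal tool.
from PIL import Image

def resample(path_in: str, path_out: str, factor: float = 0.5) -> None:
    img = Image.open(path_in).convert("RGB")
    w, h = img.size
    # Downscale with a smoothing (Lanczos) filter, then scale back up:
    # high-frequency perturbations tend not to survive the round trip.
    small = img.resize((int(w * factor), int(h * factor)), Image.LANCZOS)
    small.resize((w, h), Image.LANCZOS).save(path_out)

resample("protected_artwork.png", "resampled_artwork.png")
```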

[–] BetaDoggo_@lemmy.world 4 points 1 year ago

That isn't necessarily true, though for now there's no way to tell, since they've yet to release their code. If the timeline is anything like their last paper's, it will be out around a month after publication, which would be Nov 20th.

There have been similar papers on confusing image classification models; I'm not sure how successful they've been IRL.

[–] kennismigrant@feddit.nl 22 points 1 year ago

MIT Technology Review got an exclusive preview of the research

The article was published 3 days after the arxiv release. How is this an "exclusive preview"?

Successfully tricking existing models with a few crafted samples doesn't seem like a significant achievement. Can someone highlight what exactly is interesting here? Anything that can't be resolved by routine adjustments to loss/evaluation functions?

[–] kakes@sh.itjust.works 14 points 1 year ago

I don't believe for a second that this works, and if it did, it would be trivial to get around.

It claims to "change the pixel values imperceptibly". That just isn't how these generative models work. These models are just looking at the colors, the same way a human would. If it's imperceptible to a human, it won't affect these models. They could subtly influence them, perhaps, but it would be nowhere near the scale they claim.

My first thought was that they're trying to cash in, but from what I can tell it seems to be free (for now, at least?). Is it for academic "cred"? Or do they somehow actually think this works?

It just seems to be such a direct appeal to non-tech-savvy people that I can't help but question their motivations.

[–] RobotToaster@mander.xyz 13 points 1 year ago (1 children)

Luddites trying to smash machine looms

[–] autotldr@lemmings.world 9 points 1 year ago (1 children)

This is the best summary I could come up with:


A new tool lets artists add invisible changes to the pixels in their art before they upload it online so that if it’s scraped into an AI training set, it can cause the resulting model to break in chaotic and unpredictable ways.

The tool, called Nightshade, is intended as a way to fight back against AI companies that use artists’ work to train their models without the creator’s permission.

Using it to “poison” this training data could damage future iterations of image-generating AI models, such as DALL-E, Midjourney, and Stable Diffusion, by rendering some of their outputs useless—dogs become cats, cars become cows, and so forth.

Nightshade exploits a security vulnerability in generative AI models, one arising from the fact that they are trained on vast amounts of data—in this case, images that have been hoovered from the internet.

Gautam Kamath, an assistant professor at the University of Waterloo who researches data privacy and robustness in AI models and wasn’t involved in the study, says the work is “fantastic.”

Junfeng Yang, a computer science professor at Columbia University, who has studied the security of deep-learning systems and wasn’t involved in the work, says Nightshade could have a big impact if it makes AI companies respect artists’ rights more—for example, by being more willing to pay out royalties.


The original article contains 1,108 words, the summary contains 217 words. Saved 80%. I'm a bot and I'm open source!
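The summary above describes the general mechanism: an optimized but visually small pixel perturbation that changes what a model learns from an image. As a rough illustration only (Nightshade's own code had not been released at the time of this thread), here is a feature-collision-style sketch in Python against a surrogate encoder; the surrogate model, file names, perturbation budget, and iteration count are all assumptions, not the paper's settings.

```python
# Rough illustration of an "imperceptible" poisoning-style perturbation.
# NOT Nightshade's released code; the surrogate model, file names, epsilon and
# iteration count are assumptions made for this sketch.
import torch
import torchvision.models as models
import torchvision.transforms.functional as TF
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

# Surrogate feature extractor: ResNet-18 with its classifier head removed.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
encoder = torch.nn.Sequential(*list(backbone.children())[:-1]).to(device).eval()
for p in encoder.parameters():
    p.requires_grad_(False)

def load(path: str) -> torch.Tensor:
    # Placeholder paths; 224x224 just matches the surrogate's usual input size.
    img = Image.open(path).convert("RGB").resize((224, 224))
    return TF.to_tensor(img).unsqueeze(0).to(device)

source = load("original_artwork.png")    # the image the artist wants to "poison"
target = load("unrelated_concept.png")   # an image of a different concept

with torch.no_grad():
    target_feat = encoder(target)

eps = 4 / 255  # L-infinity budget that keeps the change visually small
delta = torch.zeros_like(source, requires_grad=True)

for _ in range(200):
    # Pull the perturbed image's features toward the unrelated concept.
    loss = torch.nn.functional.mse_loss(encoder(source + delta), target_feat)
    loss.backward()
    with torch.no_grad():
        delta -= (1 / 255) * delta.grad.sign()               # signed gradient step
        delta.copy_(((source + delta).clamp(0, 1) - source)  # keep pixels valid
                    .clamp(-eps, eps))                       # and stay within budget
        delta.grad.zero_()

poisoned = (source + delta).detach().squeeze(0).clamp(0, 1).cpu()
TF.to_pil_image(poisoned).save("poisoned_artwork.png")
```

Whether a perturbation like this survives the preprocessing of real text-to-image training pipelines is the open question the other commenters are arguing about.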

[–] hyydra_@lemm.ee 7 points 1 year ago (1 children)
[–] heeplr@feddit.de 5 points 1 year ago

now poison it

[–] Treczoks@kbin.social 9 points 1 year ago

Whoever invents such a thing simply underestimates the target group's ability to analyze it and, in the not-so-distant future, filter such things out.

[–] waspentalive@lemmy.one -1 points 1 year ago

Making AI companies pay royalties would cause them to charge for any use of their AI image generators, putting such technology beyond the reach of people who could not justify paying. The rest of us would miss out on the interesting images they might have created.