This system runs on the assumption that (a) massive generalized scraping is still required, (b) you maintain the metadata of the original image, and (c) no transformation has occurred to the poisoned picture prior to training (Stable Diffusion is 512x512). Nowhere in the linked paper do they say they conditioned the poisoned data to conform to the dataset. This appears to be a case of fighting the last war.
It is likely a typo, but "last AI war" sounds ominous
Takes image, applies antialiasing and resize
Oh, look at that, defeated by the completely normal process of preparing the image for training
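That "completely normal process" can be sketched in a few lines. This is a minimal illustration, assuming Pillow and a hypothetical `preprocess_for_training` helper; real ingestion pipelines vary, but the resize-with-antialiasing step is standard:

```python
from PIL import Image

def preprocess_for_training(path: str, size: int = 512) -> Image.Image:
    """Typical ingestion step: decode, normalize mode, and resample.

    Resampling with an antialiasing filter (Lanczos) averages
    neighboring pixels, which tends to wash out the low-amplitude,
    high-frequency perturbations that poisoning schemes rely on.
    """
    img = Image.open(path).convert("RGB")
    # LANCZOS is a high-quality antialiasing resampler
    return img.resize((size, size), Image.LANCZOS)
```

Any image, poisoned or not, comes out the other end as a clean 512x512 RGB tensor-to-be.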
Unfortunately for them, there are a lot of jobs dedicated to cleaning data, so I'm not sure this would even be effective. Plus, there's an overwhelming amount of data that isn't "poisoned", so it would just get drowned out even if it's never caught
Imagine if writers did the same things by writing gibberish.
At some point, it becomes pretty easy to devalue that content and create other systems to filter it.
if writers did the same things by writing gibberish.
Aka, "X"
I mean, isn't that eventually going to happen? Isn't AI eventually going to learn and get trained on AI-generated datasets, and small issues will start to propagate exponentially?
I just assume we have a clean pre-AI dataset and a messy, gross post-AI dataset... If it keeps learning from the latter, it will just get worse and worse, no?
Not really. It's like with humans. Without the occasional reality checks it gets weird, but what people chose to upload is a reality check.
The pre-AI web was far from pristine, no matter how you define that. AI may improve matters by increasing the average quality.
Nightshade and Glaze never worked. It's a scam lol
Shhhhh.
Let them keep doing the modern equivalent of "I do not consent for my MySpace profile to be used for anything" disclaimers.
It keeps them busy on meaningless crap that isn't actually doing anything but makes them feel better.
Artists and writers should be entitled to compensation for using their works to train these models, just like any other commercial use would. But, you know, strict, brutal free-market capitalism for us, not the mega corps who are using it because "AI".
Let's see how long before someone figures out how to poison it so it returns NSFW images
You can create NSFW ai images already though?
Or did you mean: when poisoned data is used, an NSFW image is created instead of the expected image?
Definitely the last one!
Companies would stumble all over themselves to figure out how to get it to stop doing that before going live. Source: they already are. See the Bing image generator appending "ethnically ambiguous" to every prompt it receives.
It would be a herculean if not impossible effort on the artists' part, only to watch the corpos scramble for two weeks max.
when will you people learn that you cannot fight AI by trying to poison it. there is nothing you can do that horny weebs haven't already done.
It can only target open source, so it wouldn't bother corpos at all. The people behind this object to not everything being owned and controlled. That's the whole point.
This doesn't actually work. The ingestion pipeline doesn't even need to do anything special to avoid it.
Let's say you draw cartoon pictures of cats.
And your friend draws pointillist images of cats.
If you and your friend don't coordinate, it's possible you'll bias your cat images to look like dogs in the data but your friend will bias their images to look like horses.
Now each of your biasing efforts become noise and not signal.
Then you need to consider if you are also biasing 'cartoon' and 'pointillism' attributes as well, and need to coordinate with the majority of other people making cartoon or pointillist images.
When you consider the number of different attributes that would need to be biased for a given image, and the compounding number of coordinations that would need to happen at scale to be effective, this is a nonsense initiative: an interesting research paper under lab conditions, but the equivalent of a mouse-model or in-vitro cancer cure being taken up by naturopaths as if it's going to work in humans.
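The cancellation argument above can be made concrete with a toy simulation. This is a sketch under simplifying assumptions (each "artist" contributes one unit-strength bias vector, and dataset-level averaging is what the model sees); the `mean_shift` helper and the 8-dimensional feature space are illustrative inventions, not anything from the paper:

```python
import random

def mean_shift(num_artists: int, coordinated: bool, dims: int = 8) -> float:
    """Magnitude of the average bias that survives dataset averaging.

    Each 'artist' injects a unit-strength bias vector into their images.
    If coordinated, everyone pushes in the same direction; if not, each
    picks an independent random direction, and the shifts largely cancel
    when the dataset is averaged.
    """
    random.seed(0)  # deterministic for illustration
    total = [0.0] * dims
    for _ in range(num_artists):
        if coordinated:
            direction = [1.0] + [0.0] * (dims - 1)
        else:
            # random unit vector: Gaussian sample, normalized
            raw = [random.gauss(0, 1) for _ in range(dims)]
            norm = sum(x * x for x in raw) ** 0.5
            direction = [x / norm for x in raw]
        total = [t + d for t, d in zip(total, direction)]
    mean = [t / num_artists for t in total]
    return sum(x * x for x in mean) ** 0.5  # length of surviving bias
```

With 1,000 coordinated artists the surviving bias stays at full strength (1.0); with 1,000 uncoordinated artists it collapses toward zero (roughly 1/sqrt(n)), i.e. noise rather than signal.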
So it sounds like they're altering the image data so that the image still looks the same, just the underlying data is different. Couldn't the AI companies take screenshots of the image to get around this?
Not even that, they can run the training dataset through a bulk image processor to undo it, because the way these things work makes them trivial to reverse. Anybody at home could undo this with GIMP and a second or two.
In other words, this is snake oil.
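The bulk-processing pass described above can be sketched with Pillow. The `scrub_dataset` name, the blur radius, and the JPEG re-encode settings are all illustrative assumptions; the point is only that a single cheap pass over a folder suffices:

```python
import os
from PIL import Image, ImageFilter

def scrub_dataset(src_dir: str, dst_dir: str, size: int = 512) -> int:
    """Bulk 'cleaning' pass: re-encode every image under src_dir.

    A mild blur plus a resize and lossy re-encode destroys the
    high-frequency, low-amplitude perturbations these tools add.
    Returns the number of files written.
    """
    os.makedirs(dst_dir, exist_ok=True)
    count = 0
    for name in os.listdir(src_dir):
        try:
            img = Image.open(os.path.join(src_dir, name)).convert("RGB")
        except OSError:
            continue  # skip anything that isn't a readable image
        img = img.filter(ImageFilter.GaussianBlur(radius=0.5))
        img = img.resize((size, size), Image.LANCZOS)
        base, _ = os.path.splitext(name)
        img.save(os.path.join(dst_dir, base + ".jpg"), quality=90)
        count += 1
    return count
```

This is the kind of thing a scraper can bolt onto its pipeline in an afternoon, which is why per-image poisoning doesn't scale as a defense.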
The general term for this is adversarial input, and we've seen published reports about it since 2011, when it was considered a threat that CSAM could be overlaid with secondary images so it wasn't recognized by Google image filters or CSAM image trackers. If Apple had gone through with their plan to scan private iCloud accounts for CSAM, we might have seen this development.
So far (AFAIK) we've not seen adversarial overlays on CSAM, though in China the technique is used to deter tracking by facial recognition. Images on social media are overlaid by human rights activists / mischief-makers so that social media pics fail to match security footage.
The thing is, like an invisible watermark, these processes are easy to detect (and reverse) once users are aware they're a thing. So if a generative AI project is aware that some images may be poisoned, it's just a matter of adding a detection and removal step to the pathway from candidate image to training database.
Similarly, once enough people start poisoning their social media images, the data scrapers will start scanning for and removing overlays even before the datasets are sold to law enforcement and commercial interests.