this post was submitted on 26 Jul 2024
67 points (100.0% liked)

Technology


Since the beginning of the generative AI boom, content creators have argued that their work has been scraped into AI models without their consent. But until now, it has been difficult to know whether specific text has actually been used in a training data set.

Now they have a new way to prove it: “copyright traps” developed by a team at Imperial College London, pieces of hidden text that allow writers and publishers to subtly mark their work in order to later detect whether it has been used in AI models. The idea is similar to traps that have been used by copyright holders throughout history—strategies like including fake locations on a map or fake words in a dictionary.

These AI copyright traps tap into one of the biggest fights in AI. A number of publishers and writers are in the middle of litigation against tech companies, claiming their intellectual property has been scraped into AI training data sets without their permission. The New York Times’ ongoing case against OpenAI is probably the most high-profile of these.

The code to generate and detect traps is currently available on GitHub, but the team also intends to build a tool that allows people to generate and insert copyright traps themselves.
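For a rough sense of how such a trap might work, here is a minimal, hypothetical Python sketch. It is not the team's GitHub code: the word list, function names, and exact-match detector are all illustrative. In practice you usually can't search a training set directly, which is why the article frames detection as checking whether the trap shows up in the model itself:

```python
# Hypothetical sketch of a text copyright trap (NOT the Imperial College
# code): mint a unique nonsense sentence, hide it in an HTML page, and
# later check whether it turns up in a scraped-text dump.
import secrets

# Illustrative made-up vocabulary; any sufficiently improbable token
# sequence would do.
WORDS = ["lumen", "orvane", "tessic", "pliver", "quandrel", "miraxo"]

def make_trap(n_words: int = 8) -> str:
    # A random improbable word sequence -- the textual analogue of a
    # fake word planted in a dictionary.
    return " ".join(secrets.choice(WORDS) for _ in range(n_words))

def hide_in_html(body_html: str, trap: str) -> str:
    # Invisible to human readers, but a scraper that keeps the raw text
    # will pick it up.
    return body_html + f'<span style="display:none">{trap}</span>'

def trap_found(corpus_text: str, trap: str) -> bool:
    # Crude detection: exact-match search of whatever text you can see.
    return trap in corpus_text

trap = make_trap()
page = hide_in_html("<p>My article text.</p>", trap)
print(trap_found(page, trap))  # True: the hidden sentence is in the page
```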

top 7 comments
[–] Toes@ani.social 9 points 1 month ago (2 children)

Has anyone seen the anti-AI art where people draw faint 3D shapes over the real art?

I can't find a good example of it.

[–] Mothra@mander.xyz 4 points 1 month ago (1 children)

I haven't. I've only heard of Nightshade and Glaze, as promoted on Cara, but that's not what you describe.

[–] noodlejetski@lemm.ee 1 points 1 month ago (1 children)
[–] Mothra@mander.xyz 2 points 1 month ago

Fair enough, though there aren't any bulletproof solutions for this. I'm inclined to think the solution lies in encryption and security rather than in altering the images themselves. Fuck AI, corporate greed, and the uselessly slow legal system.

[–] averyminya@beehaw.org 4 points 1 month ago

There was one video I saw that sounds like this, where there was an overlay of noise (rainbow static). They scaled up the noise, overlaid it on the drawing, then lowered the opacity and blended it.

They claimed it prevented AI from being able to use the image for training, but that just isn't true. All it did was add texture to the art; it wouldn't stop a model from learning anything, except maybe solid colors if it were only trained on these sorts of images.
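For anyone curious, a minimal sketch of the overlay trick described in the comment above (not the video's actual code; file names and parameter values are made up). It uses Pillow and NumPy to upscale random rainbow noise and blend it over the artwork at low opacity:

```python
# Minimal sketch of the noise-overlay trick (illustrative only).
# Requires Pillow and NumPy; file names below are hypothetical.
import numpy as np
from PIL import Image

def overlay_rainbow_noise(art_path: str, out_path: str,
                          alpha: float = 0.15, scale: int = 8) -> None:
    art = Image.open(art_path).convert("RGB")
    w, h = art.size
    # Small random RGB "static", upscaled with nearest-neighbour so the
    # grain shows as visible blocks rather than per-pixel speckle.
    small = np.random.randint(0, 256,
                              (max(1, h // scale), max(1, w // scale), 3),
                              dtype=np.uint8)
    noise = Image.fromarray(small).resize((w, h), Image.NEAREST)
    # Low-opacity blend: result = (1 - alpha) * art + alpha * noise.
    Image.blend(art, noise, alpha).save(out_path)

overlay_rainbow_noise("artwork.png", "artwork_noised.png")
```

As the comment notes, this only changes the image's texture; a model trained on the result would still learn the underlying art.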

[–] drwho@beehaw.org 6 points 1 month ago

Thing is, how well is it really going to work? How much is it going to cost to sue one of these companies? Because they certainly have legal representation on speed dial, and way more money available than any of us.

[–] fine_fund874@api.clubsall.com 4 points 1 month ago

A sort of fingerprinting?