this post was submitted on 07 Oct 2023
218 points (95.0% liked)

Technology


How Ashton Kutcher’s ‘non-profit start-up’ makes millions from the EU’s fight against child abuse on the net

The ‘non-profit start-up’ Thorn, founded by actor Ashton Kutcher, is a driving force behind the EU’s campaign to scan the net for child abuse material. Newly public documents and financial information obtained by Follow the Money reveal the blurred boundaries between Thorn’s do-good public face and the powerful business behind it.

all 18 comments
[–] mostvexingparse@lemmy.world 96 points 1 year ago

He's also not "fighting child abuse in the EU"; he's lobbying for mass surveillance. Europol and the BKA (the German federal police) have already said that they would like to use the technology for other crimes. In Germany especially, laws that were meant to fight child abuse or terrorism have actually been used to hunt down small-time weed dealers and relatively harmless political activists (while over 600 nazis with open arrest warrants roam free).

[–] bernieecclestoned@sh.itjust.works 26 points 1 year ago (3 children)

Safer, Thorn’s flagship software product, was launched in 2018. Backed by Microsoft’s PhotoDNA technology and with technical support from Amazon Web Services, Safer is designed to detect child abuse by matching hash values of pictures or videos uploaded by users with a database of millions of known CSAM images.

I don't really understand how this works. Are the hash values like meta data? Is there a way to maintain privacy but also check for illegal material?

[–] EvilBit@lemmy.world 29 points 1 year ago* (last edited 1 year ago) (1 children)

Hashing, at its simplest, is turning an arbitrarily large chunk of data into a single, hopefully unique, fixed-size value.

For example, if I wanted to hash a 4-letter word, the simple version would be as such:

```
H  A  S  H
8  1  19  8
```

If we take the numeric value of each letter (A=1 … Z=26) and add them together, we get 36. If the number gets too high, a clamping mechanism keeps it manageable; for our simplistic example, we could chop off any digits in the hundreds place or higher. Now if I were to hash a different four-letter word, the odds of it having the same hash value (known as a “collision”) are low. So if you tell me you sent a message with a hash of 36, I can calculate the hash of the message I received and confirm, with a certain degree of confidence, that it’s the same message you intended to send.
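That toy scheme can be written out in a few lines (purely illustrative, not a real hash function):

```python
def toy_hash(word: str) -> int:
    # Sum the alphabet positions (A=1 ... Z=26) of the letters,
    # then clamp by dropping the hundreds place and above (mod 100).
    return sum(ord(c) - ord('A') + 1 for c in word.upper()) % 100

print(toy_hash("HASH"))  # 8 + 1 + 19 + 8 = 36
```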

Now modern hashing is vastly more complicated (https://en.m.wikipedia.org/wiki/MD5), but the gist is the same. Take the data in a file, jam it all together through an algorithm to come up with a hash value, then use that to find equivalent files.

The problem here is that if it’s a classic data validation hash algorithm, changing just a single bit can change the entire hash, which would foil an identification system. So hopefully this system actually hashes images based on some kind of relative semantic information within the photo, such as color distributions and features so even if you crop or adjust the image slightly the hash still matches.
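You can see both behaviours with Python's standard `hashlib`: identical data gives identical digests, but flipping a single bit produces a completely different digest, which is exactly why a naive cryptographic hash is easy to defeat for image matching:

```python
import hashlib

original = b"some image bytes"
copy = b"some image bytes"
tweaked = bytes([original[0] ^ 1]) + original[1:]  # flip one bit

# Identical data -> identical digest.
print(hashlib.md5(original).hexdigest() == hashlib.md5(copy).hexdigest())     # True
# One flipped bit -> a completely unrelated digest (the "avalanche effect").
print(hashlib.md5(original).hexdigest() == hashlib.md5(tweaked).hexdigest())  # False
```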

[–] Ledivin@lemmy.world 11 points 1 year ago (2 children)

It sounds like they have a database of CSAM images, so they're likely hashing parts of the suspected image or video and searching for those hashes in the database

[–] bernieecclestoned@sh.itjust.works 7 points 1 year ago (1 children)

Right, so I guess the software is doing that prior to it being encrypted and sent.

Is the problem that it could do more than just scan for CSAM, i.e. anything that the government, or a dictator, decides?

[–] ultranaut@lemmy.world 6 points 1 year ago (1 children)

That's exactly the risk. There's no way to implement this type of client side scanning without building infrastructure that can then also be used to scan for other things.

Got it. Thanks!

[–] p03locke@lemmy.dbzer0.com 1 points 1 year ago (1 children)

This isn't like an anti-virus system. You can't just catalog them all. It's too easy to create. Hell, with the advent of LLMs and AI-generated images, it's going to be even easier to create.

[–] EvilBit@lemmy.world 2 points 1 year ago

Too easy to create and too easy to foil the hash, unless it’s some kind of highly sophisticated feature-based hashing.

[–] tankplanker@lemmy.world 4 points 1 year ago

This one breaks the images in their database into blocks, hashes those blocks separately, then checks your images by matching those hashes against same sized block hashes. It needs only a certain number to claim a positive match, and in theory it should be manually checked.

However, pedos tend to have tens of thousands of images, and those aren't all going to be manually checked, so the process is going to be trusted rather than proven in each case. This is risky because it only matches some blocks rather than all the blocks of the whole image, and it can be defeated by simple filters or by changing enough of the blocks to make the test meaningless.
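A rough sketch of that block-matching idea (the block size, the threshold, and the use of SHA-256 are all my own assumptions for illustration, not details from the article):

```python
import hashlib

BLOCK_SIZE = 4096     # hypothetical block size
MATCH_THRESHOLD = 3   # hypothetical number of matching blocks needed

def block_hashes(data: bytes) -> set[str]:
    # Hash each fixed-size block of the file separately.
    return {
        hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
        for i in range(0, len(data), BLOCK_SIZE)
    }

def is_match(candidate: bytes, known_hashes: set[str]) -> bool:
    # Flag the file if enough of its block hashes appear in the database.
    return len(block_hashes(candidate) & known_hashes) >= MATCH_THRESHOLD
```

This also shows the weakness described above: a file that still shares `MATCH_THRESHOLD` untouched blocks with a database entry is flagged, while a simple filter that alters every block slips through entirely.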

[–] AnanasMarko@lemmy.world 15 points 1 year ago (1 children)
[–] JoBo@feddit.uk 12 points 1 year ago

12ft.io requires a VPN for some people, here's an archive link for convenience.