this post was submitted on 25 Jul 2023
13 points (100.0% liked)

Fediverse


This magazine is dedicated to discussions on the federated social networking ecosystem, which includes decentralized and open-source social media platforms. Whether you are a user, developer, or simply interested in the concept of decentralized social media, this is the place for you. Here you can share your knowledge, ask questions, and engage in discussions on topics such as the benefits and challenges of decentralized social media, new and existing federated platforms, and more. From the latest developments and trends to ethical considerations and the future of federated social media, this category covers a wide range of topics related to the Fediverse.

founded 1 year ago
 

This is entirely the fault of the IWF and Microsoft, who create "exclusive" proprietary CSAM prevention software and then license it only to big tech companies.

top 7 comments
[–] Anomander@kbin.social 24 points 1 year ago

Putting the blame on Microsoft or IWF is meaningfully missing the point.

People were responsible for moderating what showed up on their forums or servers for years before these tools existed, and they've kept doing the same since. Neither the tool nor its absence is responsible for child porn getting posted to Fediverse instances. If those shards won't take action against CSAM material now, what good will the tool do? We can't run it here and have the tool go delete content from someone else's box.

While those tools would make some enforcement significantly easier, the fact that enforcement isn't meaningfully occurring on all instances isn't something we can point at Microsoft and claim is their fault somehow.

[–] mojo@lemm.ee 15 points 1 year ago

The same people who are mad at Meta for scraping already-public information are now mad at Microsoft for not forcing itself into the fedi to scan all private and public content? Consistent viewpoints are hard!

[–] OsrsNeedsF2P@lemmy.ml 12 points 1 year ago* (last edited 1 year ago) (1 children)

Publishing a list of hashes would make it trivial for abusers to know when their images are being flagged. It would be better to get M$ to do the scanning work themselves.
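
A rough sketch of why a public list would be so easy to game, assuming it were published as plain cryptographic digests (real systems like PhotoDNA use proprietary perceptual hashes; the `is_flagged` helper and the hash value below are made up for illustration):

```python
# Illustrative only: checking a local image against a hypothetical published
# hash list. SHA-256 stands in for whatever hash format such a list would use.
import hashlib
from pathlib import Path

# Pretend this set is the published list of flagged hex digests.
published_hashes = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def is_flagged(image_path: str) -> bool:
    digest = hashlib.sha256(Path(image_path).read_bytes()).hexdigest()
    return digest in published_hashes

# An abuser could simply re-encode or tweak a file until is_flagged() returns False.
```

With the list in hand, anyone can pre-screen their own uploads offline before posting, which defeats the point of publishing it.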

[–] HelixDab@kbin.social 6 points 1 year ago

Bingo. It would also make it trivial to alter images just enough that they no longer match the hash, and then they could post shit that would need to be manually flagged and removed.

I already see things like this with pirated media; pirates will include extraneous material bundled with the target media so that it's not automatically flagged and removed.
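
To make that concrete, here's a minimal sketch assuming an exact cryptographic hash list: flipping a single bit produces a completely different digest, which is part of why perceptual hashes that tolerate small edits (like PhotoDNA) are kept closed rather than published.

```python
# Illustrative only: a one-bit change defeats exact hash matching entirely.
import hashlib

original = b"...image bytes..."
altered = original[:-1] + bytes([original[-1] ^ 0x01])  # flip one bit in the last byte

print(hashlib.sha256(original).hexdigest())
print(hashlib.sha256(altered).hexdigest())  # bears no resemblance to the first digest
```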

[–] Trekman10@sh.itjust.works 10 points 1 year ago (1 children)

https://gleasonator.com/objects/bf56ad41-7168-4db9-be17-23b7e5e08991

It totally looks to me like Big Tech is gonna try to leverage CSAM prevention against the Fediverse. "Oh you want to prevent sex crimes against CHILDREN? Sure, but only on our proprietary services because we're certainly not gonna fight CP for FREE!"

[–] BootlegHermit@kbin.social 1 points 1 year ago (1 children)

To me it seems like a push towards the whole "own nothing" idea. Whether it's something like CSAM detection or even mundane SaaS, things are slowly shifting away from the end user having control over their "own" devices.

I'm torn, because on the one hand, pedophiles and child abusers deserve the severest of consequences in my opinion; on the other hand, I also think that people should be able to do and/or say whatever they want so long as it's not causing actual harm to another.

[–] elscallr@kbin.social 1 points 1 year ago

It's much more likely a matter of preventing their detection technology from falling into the hands of people who would want to circumvent it.