I see.
So what do you think would help with this particular challenge? What kinds of tools or facilities would help counter it?
Off the top of my head, do you think…
cc @PrettyFlyForAFatGuy@lemmy.ml
I can think of some things I could implement on the Lemmy server side that could help with this. I'm pretty sure the IWF maintains a list of file hashes for CSAM, and there are probably a few other hash sources you could draw from too.
So the process would be something like the following:

- hash each piece of uploaded media as it comes in
- check the hash against the known-CSAM lists
- on a match, pull the media automatically, permaban the user, and trigger the law-enforcement report

So for known CSAM you don't have to subject mods or users to it before it gets pulled.
For new or edited media with unrecognised hashes that does turn out to contain CSAM, a mod/admin would have to review and flag it, at which point the same permaban and automatic law-enforcement report could be triggered.
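A minimal sketch of that check in Rust, assuming the `sha2` and `hex` crates and a pre-loaded set of hex-encoded digests (illustrative only: real lists like the IWF's are distributed under contract and typically use perceptual hashes such as PhotoDNA rather than plain cryptographic hashes):

```rust
use sha2::{Digest, Sha256};
use std::collections::HashSet;

/// Outcome of comparing an upload against the known-CSAM hash lists.
pub enum HashCheck {
    /// No match: the media is not in any known list.
    Unrecognised,
    /// Matched a known hash: handle it without any human ever seeing it.
    KnownMatch,
}

/// Hash the uploaded bytes and look them up in the pre-loaded set.
pub fn check_upload(upload: &[u8], known_hashes: &HashSet<String>) -> HashCheck {
    let digest = hex::encode(Sha256::digest(upload));
    if known_hashes.contains(&digest) {
        HashCheck::KnownMatch
    } else {
        HashCheck::Unrecognised
    }
}
```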
The federation aspect could be trickier though, which is why this would probably be better as a built-in Lemmy feature rather than a third-party add-on.
I'm guessing it would be possible to create an automoderator that does all this at the community level and only approves a post to go live once it has passed the checks.
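Continuing the sketch above (reusing `check_upload` and `HashCheck` from it; all names are hypothetical, not actual Lemmy internals), the gate could look something like this:

```rust
/// Visibility states for a newly submitted post.
pub enum PostStatus {
    Pending, // submitted, checks not yet run; not visible to anyone
    Live,    // passed the automated checks
    Removed, // known-hash match, pulled before anyone saw it
}

/// Community-level automod hook: a post starts out Pending and only
/// goes Live once the automated checks have passed.
pub fn on_media_submitted(upload: &[u8], known_hashes: &HashSet<String>) -> PostStatus {
    match check_upload(upload, known_hashes) {
        // The permaban and automatic law-enforcement report would be
        // triggered here (side effects elided in this sketch).
        HashCheck::KnownMatch => PostStatus::Removed,
        // Unrecognised media goes live; if a mod later flags it as
        // CSAM, the same ban-and-report path fires.
        HashCheck::Unrecognised => PostStatus::Live,
    }
}
```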
That sounds like a great starting point!
🗣Thinking out loud here...
Say, if a crate implements the `AutomatedContentFlagger` interface, it would show up on the admin page as an "Automated Filter" and the admin could enable or disable it on demand. That way we could have more filters than just CSAM using the same interface.
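A rough idea of what that could look like as a Rust trait, using the `async_trait` crate (purely illustrative; this trait and its methods are hypothetical, not an existing Lemmy API):

```rust
use async_trait::async_trait;

/// Verdict returned by an automated filter.
pub enum FlagVerdict {
    Clean,
    Flagged { reason: String },
}

/// Anything implementing this shows up on the admin page as an
/// "Automated Filter" that can be enabled or disabled on demand.
#[async_trait]
pub trait AutomatedContentFlagger {
    /// Name displayed in the admin UI.
    fn name(&self) -> &str;

    /// Inspect an uploaded media item before it goes live.
    async fn check_media(&self, media: &[u8]) -> FlagVerdict;
}
```

The CSAM hash check would then just be one implementation among several, and the admin page could iterate over all registered filters.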