this post was submitted on 25 Aug 2024
6 points (80.0% liked)

Santabot

Santabot is an automated moderation tool, designed to reduce moderation load and remove bad actors in a way that is transparent to all and mostly resistant to abuse and evasion.

This is a community devoted to meta discussion for the bot.

  1. It would be extraordinarily easy to bot it and just silence anyone you want.
  2. I agree, moderation is absolutely necessary to maintain civil discussion, but silencing people because they have unpopular opinions is a really bad idea.
  3. I love Lemmy because it is the ultimate embodiment of decentralised free speech. This destroys that.
  4. Hypothetically, if I were a bad actor, let's just say lemmy.ml or hexbear, and I decided I wanted to silence anyone who disagrees with what I have to say, then I could just make a fork of this project that only values my instance's votes and censors anyone who doesn't agree with what my community thinks (see the sketch after this list).
  5. This tool simply acts as a force multiplier for those who want to use censorship as a tool for the mass silencing of dissent.
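
To illustrate how little effort point 4 would take, here is a minimal sketch of the kind of fork being described. Everything in it is hypothetical: the instance name, the vote structure, and the scoring function are placeholders, not Santabot's actual code.

```python
# Hypothetical sketch of the fork described in point 4: a vote-counting step
# that silently discards every vote not cast from one "approved" instance.
# The data layout and names are illustrative, not taken from Santabot itself.

APPROVED_INSTANCE = "bad-actor.example"  # the hypothetical bad actor's home instance

def weighted_score(votes: list[dict]) -> int:
    """Sum vote values, counting only voters from the approved instance."""
    return sum(
        vote["value"]  # +1 for an upvote, -1 for a downvote
        for vote in votes
        if vote["voter"].split("@")[-1] == APPROVED_INSTANCE
    )

# Only the approved instance's vote is counted; everyone else is ignored.
print(weighted_score([
    {"voter": "alice@bad-actor.example", "value": 1},
    {"voter": "bob@slrpnk.net", "value": -1},  # discarded
]))  # -> 1
```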

Yes, I've read the Q&A, but I can simply think of more ways this bot can be abused for bad than ways it can be used for good.

top 10 comments
[–] Tiresia 3 points 2 months ago (1 children)

First off, no rules in a centralized system can survive corrupt admins/moderators. At best, the rules can make it difficult for the admins/mods to hide their malfeasance. If we don't assume good faith from the admins, this discussion is pointless because we should just leave this instance.

Second, upvotes and downvotes already moderate discussion. The default comment sorting algorithm prioritizes upvoted comments and hides downvoted ones, and people do tend to treat downvoted comments negatively. Popularity already matters; it's just a question of to what extent each thread gives you a fresh start.

  1. Right now, slrpnk account generation is gatekept by the mods. You have to pass a Turing test to be let in. This makes it difficult to amass a sufficient army of bots without mod assistance. It's worth looking out for, but not expected by any means.

  2. Agreement and dislike are different things. Empirically, people can become more hardened in their opinions if they see crappy disagreement - that's why organizations like Fox News show a constant cavalcade of liberals and leftists being stupid. As long as people upvote well-formulated disagreement, this could actually improve discussion, because it filters out the comments that would never have convinced anyone anyway. That's a big "as long as", so it's worth seeing in practice whether or not it holds.

  3. Lemmy instances have admins and moderators with absolute, unaccountable power over bans. Lemmy has never been decentralized or pro-free-speech in the ways Santabot might have destroyed on a more fundamentally anarchic social network. If you want to make use of Lemmy's decentralization, make your own instance and see who wants to let you crosspost. If you want more, make your own social media platform that is (more) fully decentralized.

  4. Yes. Bad actors gonna act bad. Stay away from places that give them authority.

  5. Not very well. You're leaving it up to the whims of the voting public. It would be easier to write a bot that asks ChatGPT whether a user holds certain opinions and bans them if it says yes (see the sketch below), or to deputize more (informal) mods to ban people based on their personal opinions.
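
As a rough illustration of how low that bar is, here is a minimal sketch of the opinion-screening bot point 5 alludes to. It assumes the official `openai` Python client; the model name, prompt, and the `ban()` stub are placeholders, not anything Santabot actually does.

```python
# Hypothetical sketch of the "easier" abuse in point 5: ask an LLM whether a
# user expresses a disfavored opinion and ban on a "yes". Nothing here is
# Santabot code; the model, prompt, and ban() stub are placeholders.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def holds_disfavored_opinion(comment_history: str) -> bool:
    """Ask the model a yes/no question about the user's comments."""
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": "Answer only yes or no: do these comments express "
                       "opinion X?\n\n" + comment_history,
        }],
    )
    return reply.choices[0].message.content.strip().lower().startswith("yes")

def ban(user: str) -> None:
    """Stand-in for whatever ban mechanism a given deployment exposes."""
    print(f"would ban {user}")

def moderate(user: str, comment_history: str) -> None:
    if holds_disfavored_opinion(comment_history):
        ban(user)
```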

It is natural that an object can be used for bad in more ways than it can be used for good. 'Good' is a fragile concept, while 'bad' is everything else. A kitchen knife can be used for bad more easily and in more ways than it can be used for good; so can a brick or a water bottle. The question is whether its use here pushes things towards good, both now and in the future.

I understand expecting this experiment to go poorly, but I think it's excessive to say the experiment should not be run at all.

[–] muntedcrocodile@lemm.ee 3 points 2 months ago* (last edited 2 months ago) (1 children)

I think that's the key: votes moderate comments and posts in terms of sorting. They don't moderate them by outright silencing an opinion or idea.

  1. I'm assuming that the voting is based on all accounts across all instances, so it's not just your instance whose account creation rules matter; it's all instances across the Fediverse, right?
  2. I think ultimately people vote based on preconceived biases more than on the validity of an argument or its facts. I'd definitely love to see some data on how the experiment plays out; it'd be quite interesting if we could get that in full.
  3. I guess it's not necessarily free speech but more the marketplace of ideas. My main concern here is that it will get implemented across the Fediverse without admins and moderators thinking about the long-term effects of such a system.
  4. I prefer instances that have a more open policy in terms of defederation. I feel this tool could give people who are willing to go to the lengths of vote manipulation direct moderation capabilities without their having to be a moderator in the community itself. Hence, I believe this would lead to instances with more open federation policies being more susceptible to manipulation by extremists.
  5. Sure, but by misusing this tool I can affect the moderation of an individual in a community that I don't have any moderation powers in.

I definitely think it's an interesting experiment that's worth running, but I'm wary of what the outcomes will be if it gains mass adoption.