This is the shittiest bot ever
Santabot
Santabot is an automated moderation tool, designed to reduce moderation load and remove bad actors in a way that is transparent to all and mostly resistant to abuse and evasion.
This is a community devoted to meta discussion for the bot.
There are so many problems with this.
- It would be extraordinarily easy to bot it and just silence anyone you want.
- I agree, moderation is absolutely necessary to maintain civil discussion, but silencing people because they have unpopular opinions is a really bad idea.
- I love lemmy because it is the ultimate embodiment of decentralised free speech. This destroys that.
- If I were a bad actor, hypothetically let's just say lemmy.ml or hexbear, and I decided I wanted to silence anyone who disagrees with what I have to say, then I could just make a fork of this project that only values my instance's votes and censors anyone who doesn't agree with what my community thinks.
- This tool simply acts as a force multiplier for those who want to use censorship as a tool for the mass silencing of dissent.
Oh no! It hadn't occurred to me that excluding unpopular opinions might be a problem. If only I'd thought of that, I might have looped in some other people, talked extensively about the problem and carefully watched how it was working in practice and tweaked it until it seemed like it was striking the right balance. I might have erred heavily on the side of allowing people to speak to the point that I was constantly fielding complaints from people wanting me to remove something they said shouldn't be allowed.
And furthermore, you're right. If this catches on then lemmy.ml might be able to silence dissenting views. That would be terrible.
What the hell dystopian meow meow beanz nonsense is this?
Oh no, my MeowMeowBeanz!
As I posted in the other thread, I’m very interested to see how this works out. I am definitely curious to see what the bot thinks of some of my posting habits if you are able to share that.
Sure. You have a pretty large number of comments from the last month, pretty heavily voted on, with a ratio of about 2.7:1 positive rankings. This morning the ratio needed to be 1:1 or better to post; I've since changed it to 2:1, but 2.7:1 is still well over the line.
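The threshold check described above might look something like this sketch. The function and variable names are my own guesses, not Santabot's actual code; it just illustrates a positive-to-negative ratio gate that was raised from 1:1 to 2:1.

```python
# Hypothetical sketch of the posting-threshold check; the real bot's
# names and logic may differ.
POSITIVE_THRESHOLD = 2.0  # was 1.0 this morning, raised to 2:1 since

def may_post(positive_weight: float, negative_weight: float,
             threshold: float = POSITIVE_THRESHOLD) -> bool:
    """Allow posting when positive rankings outweigh negative ones
    by at least `threshold` to 1."""
    if negative_weight == 0:
        return positive_weight > 0
    return positive_weight / negative_weight >= threshold

print(may_post(2.7, 1.0))  # a 2.7:1 ratio clears the 2:1 bar -> True
```

A user at 2.7:1 stays well over the line even after the threshold change, which matches the judgement described above.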
Interactions it looks at highly positively are things like this:
- https://slrpnk.net/comment/9258523
- https://slrpnk.net/comment/9140065
- https://slrpnk.net/comment/9237926
Interactions it looks at highly negatively are things like this:
Your user is a great example of a hard situation for the bot to judge. To me, all five comments are perfectly reasonable. But you're getting downvotes from some highly trusted users on the last two, so it counts them as negative things that are outweighed by the weight of other interactions you've had.
If someone were only posting things like the last two comments, would that be ban-worthy? To the bot it would be. I would probably agree with that in most cases, even though the comments are fine, since it suggests a single-issue account that's always getting into disagreements, which usually doesn't bode well for that user's level of contribution. But it's something to watch closely, since ranking the last two comments negatively starts to smell like creating a single-viewpoint echo chamber.
I see this as the bot reaching the right judgement given pretty difficult data to interpret.
How will this be audited to ensure fascists don't game the downvotes to quell pro-solarpunk, pro-liberation messaging?
Gaming the system is, I think, more unlikely than it might seem. In my auditing leading up to making it live, the problem was the opposite of that. The average fascist account, if it's not banned outright, might have a "weight" of plus or minus single digits, whereas slrpnk admins might have a weight of several hundred. Some people were getting banned just because a single downvote from one of the admins, applied to a reasonable comment, outweighed the whole community's consensus.
I am watching the results, to some extent, and depending on good people who do receive moderation saying something if it seems unreasonable. I think it is possible to create a network of artificial votes to game the system, but you have to do a lot. It's resistant to simply massively inserting fake votes from some random account to throw off the tally. You have to engineer artificial trust for yourself, and outweigh a community consensus of millions of votes. I think that, if it even takes off to the point that defeating it becomes a focal point, the level of voting that's required to game the system will be large enough to be obvious during an audit.
Fantastic. Glad to know you thought about this.
Good to know about that issue with the weight. I guess I need to stick more closely to the "downvote etiquette" as per our CoC.
I wouldn't stress about it too much. I played with the tuning and did more detailed spot checking of its judgements, like the examples I sent you in DM, until I agreed with them almost every time I checked. That's why SMOOTHING_FACTOR is so much higher now than it used to be: to reduce the influence of single high-profile accounts. I just meant that in my testing, before I had a chance to tweak it extensively, the problem was more often an overly "pro-solarpunk" judgement than the other way around.
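One common way a smoothing constant damps outsized individual weights is to compress large raw weights while leaving small ones nearly untouched. This is a guess at the general idea; how Santabot actually applies its SMOOTHING_FACTOR may well differ.

```python
# Hedged sketch: one plausible use of a smoothing constant, not
# necessarily how Santabot applies SMOOTHING_FACTOR.
SMOOTHING_FACTOR = 100.0  # larger values damp high-weight accounts more

def effective_weight(raw_weight: float) -> float:
    """Compress large raw weights so no single account dominates."""
    return raw_weight / (1.0 + raw_weight / SMOOTHING_FACTOR)

print(effective_weight(5.0))    # ~4.76: small weights barely change
print(effective_weight(500.0))  # ~83.3: a weight of several hundred is reined in
```

With a formula like this, a single admin's downvote can no longer outweigh the whole community's consensus on its own, which matches the tuning problem described earlier in the thread.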
Loving this. Very walkaway (the book, not the community) vibes.