this post was submitted on 15 Sep 2024
-304 points (1.6% liked)

Santabot


Santabot is an automated moderation tool, designed to reduce moderation load and remove bad actors in a way that is transparent to all and mostly resistant to abuse and evasion.

This is a community devoted to meta discussion for the bot.

founded 5 months ago
submitted 3 months ago* (last edited 3 months ago) by auk to c/santabot
 

The steady stream of people telling me that the Santa moderation bot is going to delete anyone who's downvoted or who disagrees with the group continues unabated.

Here's an olive branch: you've got a point. The bot is a black box, and I juggle the parameters of some secret process to ban people who got some downvotes; I can understand how that comes across as toxic. I might or might not be lying about taking careful time to look over its judgements and making sure the impact is more positive than negative, but at the end of the day, it doesn't matter. You still have to trust my intentions and trust the bot to make good decisions, and trusting an automated system with that rarely works out well.

To me, delegating the moderation of the community to the segment of that community that's trusted and consistently upvoted by the rest of us is better than giving it to a handful of people who wield unilateral power according to random rules. I like the bot's judgements most of the time when I look at them. The question is simply whether this algorithm is actually doing that delegation effectively, or if it's just banhammering anyone who gets a couple of downvotes. I'm confident that it's doing the first thing almost all of the time.
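The actual scoring algorithm isn't published anywhere in this thread, so here is a purely hypothetical sketch of the "delegate to trusted users" idea. The names, trust weights, and linear formula are all invented; the point is only to show how weighting each vote by the voter's own standing differs from counting raw votes:

```python
# Hypothetical sketch: weight each vote by the voter's own standing,
# so established, well-regarded users count for more than fresh accounts.
# This is NOT Santabot's real algorithm, just an illustration of the idea.

def user_rank(votes_received, voter_trust):
    """votes_received: list of (voter, +1 or -1); voter_trust: dict voter -> weight."""
    return sum(value * voter_trust.get(voter, 0.0) for voter, value in votes_received)

trust = {"alice": 1.0, "bob": 0.8, "new_account": 0.1}
votes = [("alice", +1), ("bob", +1), ("new_account", -1)]
print(round(user_rank(votes, trust), 2))  # 1.7
```

Under a scheme like this, a pile of downvotes from low-trust accounts moves a rank far less than one downvote from a long-standing, consistently upvoted member.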

In talks behind the scenes with other moderators, I've been going into a lot of detail about specific users and going back and forth about judgements. I also do a ton of checking behind the scenes. I don't want to do that publicly. I think it would be deeply informative to post a list of the "top ten" and "bottom ten" users, and go into detail about why the low-ranked users got where they are, but that's probably not a good idea.

What I would like to do is share that information on some level, so that people can see what's going on instead of just taking my word that everything's good. It's tough because I can't break down every level of detail without invading all kinds of people's privacy. That said, I do think there's a way to open up the process so people can see and give input to what's going on.

One happy medium would be to have the bot post its spot-check automatically about once a week. It could pick out one random user who's barely on the borderline, and post a couple of the worst comments they made. Usually, when I'm messing around with its parameters, that's what I'm trying to do. There are some comments that are clearly toxic and have no business anywhere. There are some comments that are clearly free speech, and even if they're getting downvotes, they deserve to be heard. Then there are some comments on the borderline between the two. My goal is to set up the parameters so that the borderline rank value for a ban matches up with the users who are on that borderline.
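As a rough illustration of that weekly spot-check (the names, numbers, and zero cutoff below are invented, not Santabot's real parameters), the bot would need two pieces: find whoever sits nearest the ban threshold, and surface their lowest-scoring comments:

```python
# Illustrative sketch of the proposed weekly spot-check (all data invented):
# pick the user whose rank sits closest to the ban cutoff and pull their
# lowest-scoring comments for public review.

BAN_CUTOFF = 0.0  # assumed threshold, for illustration only

def pick_borderline(ranks):
    """ranks: dict user -> rank. Return the user nearest the cutoff."""
    return min(ranks, key=lambda user: abs(ranks[user] - BAN_CUTOFF))

def worst_comments(comments, n=2):
    """comments: list of (score, text). Return the n lowest-scoring."""
    return sorted(comments)[:n]

ranks = {"user_a": 4.2, "user_b": 0.3, "user_c": -2.5}
print(pick_borderline(ranks))  # user_b
```

Posting only the borderline case keeps the clear-cut bans and the clearly fine users out of the public discussion, which matches the stated goal of tuning the threshold rather than relitigating every decision.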

I can see some upsides and downsides to posting that publicly. What do people think, though? What would you want to see, in order to make an informed decision about what you think of this whole approach?

[–] Five 2 points 3 months ago* (last edited 3 months ago) (1 children)

I love this -- Reddit used to do a yearly thing where they'd send you your top upvoted and downvoted posts and comments, which was always nostalgic and fascinating to me as a user. Like canvas, I think it's an idea worth copying with a more federated framework.

Maybe you could write an action that allows Fediverse members to get a similar breakdown and visualization automatically generated and then delivered to them via direct message. People who are curious about how the bot works can message the bot and see how it views them, and then they can share the details publicly if they so choose. I think this could be really popular.

[–] auk 2 points 3 months ago (1 children)

How about this?

That's 30 days of Santa's ranking for your user, showing the comment threads that made big impacts up or down. The dotted horizontal line is 0, and the cutoff for banning a person is down below that line. Here are some anonymized examples of people who got banned:

They were doing well until, in the pink part, they posted 28 comments heatedly insisting that there's no genocide in Gaza.
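In code terms, the chart being described boils down to a daily rank series checked against a cutoff that sits below the zero line. This toy sketch uses invented numbers purely to mirror that description:

```python
# Hypothetical rendering of the described chart: a 30-day rank series
# plotted against a zero line, with the ban cutoff strictly below it.
# The cutoff value and the series are invented toy data.

CUTOFF = -5.0  # assumed: ban threshold sits below the zero line

def days_below_cutoff(daily_rank):
    """Return the day indices where the rank fell under the ban cutoff."""
    return [day for day, rank in enumerate(daily_rank) if rank < CUTOFF]

series = [2, 1, -1, -4, -6, -7, -3, 0, 2]  # toy data for a few days
print(days_below_cutoff(series))  # [4, 5]
```

The gap between zero and the cutoff is what gives a user room to dip negative during a heated thread without immediately crossing into ban territory.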

I think this is informative about how the system works without being useful for gathering analytics to rig the system. You can see what kinds of participation affect it in what ways, and how each fits into the context of your total participation for the month, but the emphasis is on the comments and behavior instead of on the math. What do you think?

[–] Five 1 points 3 months ago (1 children)

Yes, this is very informative.

It's an instructive visualization, but I like it less. The spectral timeline shows how big the changes are and places them chronologically, and you can see at a glance how contentious the month was. The line graph tells a story about being rewarded or punished for being agreeable or contradictory to the zeitgeist. It reads like the timeline of an American FICO progress graph or a Chinese social credit score, things I have a visceral reaction to. It's a dopamine hit to have a comment collect upvotes, but I'm more proud of positions that I'm confident will age well with time and were presented well, but were downvoted anyway. That's evidence that I'm not in an echo chamber and I'm not being ignored. If I could pick which graph I got delivered to my stocking, I'd pick the spectral timeline.

The line graph is clearly better suited for discussing how the system functions, though. For example, it appears a new member won't get banned for a few negative interactions early in their career, since the cutoff is below zero. It also appears the second banned user, if they wait 15 days, will have a positive Santabot assessment regardless of how far down the valley they went during the start of the month. You chose the right level of detail to maintain their anonymity.

[–] auk 2 points 3 months ago (1 children)

What about this?

I see what you're saying. The line graph feels kind of paternalistic. It's saying that if you disagree with the herd, you're going to lose your value. I think the spectral timeline with a legend may be better, at least for a frequent posting and followup use case.

> The line graph is clearly better suited for discussing how the system functions though. For example, it appears a new member won’t get banned for a few negative interactions early in their career, as the cutoff is below zero.

Yes. We give some leeway so that someone doesn't get penalized for a single random downvote early in their career, but we still need to be reactive enough that if someone makes a new account and posts a garbage comment, we jump on it. I have a process that's meant to deal with that, but it's tricky. I'm still working it out, and I rolled it out a little early, so right now it's jumping the gun and deleting some comments from people whose comments really shouldn't be deleted.
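One common way to implement that kind of leeway, sketched here with invented thresholds and no claim that it matches the bot's actual process, is to shrink small samples toward zero while keeping a separate fast path for a single egregious comment:

```python
# Sketch of one possible leeway mechanism (not Santabot's real code):
# pad new accounts with neutral pseudo-votes so a lone early downvote
# can't sink a rank, while a separate fast path still flags a single
# heavily downvoted comment from a fresh account.

PSEUDO_VOTES = 10      # assumed smoothing strength
FAST_PATH_SCORE = -8   # assumed per-comment threshold for "garbage"

def smoothed_rank(total_score, n_votes):
    """Shrink small samples toward zero; converges to the raw average as votes grow."""
    return total_score / (n_votes + PSEUDO_VOTES)

def should_review(comment_score):
    """Fast path: one very bad comment triggers review regardless of history."""
    return comment_score <= FAST_PATH_SCORE

print(round(smoothed_rank(-1, 1), 3))  # -0.091: one early downvote barely moves the rank
print(should_review(-12))              # True
```

The tension described above shows up directly in these two constants: a bigger `PSEUDO_VOTES` means more grace for new users, while a tighter `FAST_PATH_SCORE` means the jump-the-gun failure mode gets more likely.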

It's tough because it's hard to test in the abstract, and by definition, people who don't comment a lot don't leave many comments to use as test cases. What I'm planning to do is work on it a little more, testing in production, and once it's worked out, I'll make a post explaining it all.

[–] Five 1 points 3 months ago

I gotta say, you're really good at making visualizations. I like this one best, but even the ones I liked less were extremely informative and readable.