this post was submitted on 17 Jul 2024

Santabot


Santabot is an automated moderation tool, designed to reduce moderation load and remove bad actors in a way that is transparent to all and mostly resistant to abuse and evasion.

This is a community devoted to meta discussion for the bot.

auk
 

Some people have been accusing me of creating this bot so I can manifest a one-viewpoint echo chamber. They tell me that they already know I'm trying to create an echo chamber, that anything I say otherwise is a lie, and that they're not interested in talking about the real-world behavior of the bot, even when I offer to fix anything that seems like a real echo chamber effect it's creating.

I don't think it's creating an echo chamber. We've had a Zionist, an opponent of US imperialism, a lot of centrists, some never-Bideners, some fact checking, and one "fuck you." The code to delete downvoted comments from throwaway accounts is pretty much working, but it's only been triggered once. Someone said Mike Johnson's ears were ugly and that that made him a bad person; everyone hated and downvoted it, and the bot deleted it because the commenter didn't have enough other recent history for the bot to categorize them. I sent the user a note explaining how the throwaway detection works.
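In case anyone is curious about the mechanics, this is roughly the shape of that check. It's a simplified sketch rather than the bot's real code, with placeholder names and thresholds:

```python
from dataclasses import dataclass

# Placeholder structure; the real bot reads this data from the Lemmy API.
@dataclass
class Comment:
    score: int                    # upvotes minus downvotes on this comment
    author_recent_comments: int   # how much recent history the author has elsewhere

# Hypothetical thresholds, chosen here only for illustration.
DOWNVOTE_THRESHOLD = -3   # the comment has to be clearly unpopular
MIN_HISTORY = 5           # authors with less history than this can't be categorized

def should_delete_as_throwaway(comment: Comment) -> bool:
    """Delete heavily downvoted comments from accounts with too little
    recent history for the bot to categorize them."""
    heavily_downvoted = comment.score <= DOWNVOTE_THRESHOLD
    uncategorizable = comment.author_recent_comments < MIN_HISTORY
    return heavily_downvoted and uncategorizable
```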


I want to list out the contentious topics from the week, and how I judge the bot's performance and the result for each one, to see if the community agrees with me about how things are looking:

Biden's supreme court changes

I like the performance here. The pleasant comments have a diversity of opinion, but people aren't fighting or shouting their opinions back and forth at each other. The lemmy.world section looks argumentative and low-quality.

Blue MAGA

I don't love the one-sidedness of the pleasant comments section. It's certainly more productive with less argumentation, which is good, but there are only two representatives of one of the major viewpoints chiming in, which starts to sound like an attempt at an echo chamber.

I read the lemmy.world version for a while, and I started to think the result here is acceptable. The pleasant version still has people who are fully able to speak up for the minority viewpoint, but the ones who did were those being coherent about it and giving reasons. A lot of the people who spoke up in the lemmy.world version, on both sides, were combative and got drawn into long hostile exchanges, without listening or backing up what they were saying. That's what I don't want.

Biden's Palestine policy

I don't love "fuck you." I debated whether it was protected political speech expressing a viewpoint on the article, or a personal attack, and I couldn't decide, so I left it up. For one thing, I think it's good to err on the side of letting people say what they want to the admins, to bend over backwards just slightly to avoid a situation where some users or their viewpoints are more special, or shielded from firm disagreement, than others. And yes, I recognize the irony.

This one is my least favorite comments section. The user who's engaging in a hostile exchange of short messages has a lot of "rank" to be able to say what they want, and the current model assumes that since people generally like their comments, they should be allowed to speak their mind. The result, however, is starting to look combative to me. It's still far better than the exchanges from lemmy.world, but I don't love it.

What does everyone else think? I don't know if anyone but me cares about these issues in this depth, but I'm interested in hearing any feedback.

[–] Five 4 points 2 months ago* (last edited 2 months ago)

I do feel the community is trending towards an echo chamber. I think it is systemic, but I don't think it's intentional.

There's a version of the prisoner's dilemma that occurs in online debates. When both people argue in good faith and listen to each other, the discussion takes the most time and mental effort, but there is also a feeling that the effort was not wasted. When one person is arguing in good faith while the other is engaging with low effort or trolling, the effort put into a good faith argument feels wasted. When both participants troll each other, nobody is seriously challenged, but neither of them wastes much time or mental effort in the process either.

This is meant to be an amoral framing of the situation. Time is limited, and time spent inventing novel arguments to convince an implacable enemy is time that could be spent doing something more effective, so trolling makes sense. Obviously, when this approach is the dominant strategy in a forum, the space becomes toxic, anti-intellectual, and useless for evaluating the strength of ideas. I feel like you implicitly understand that, and are trying to create tools to make it easier to prevent that from happening.
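To make that concrete, here's one way to write the payoff structure down. The numbers are made up and only their ordering matters; under them, trolling is the dominant strategy, which is the dynamic I'm describing:

```python
# Rough payoff matrix for one participant in an online argument.
# (my strategy, their strategy) -> my payoff, where payoff loosely means
# "value gained minus time and effort spent". Numbers are arbitrary;
# only their ordering matters.
PAYOFF = {
    ("good_faith", "good_faith"): 2,   # costly, but the effort feels worthwhile
    ("good_faith", "troll"):     -3,   # effort wasted on an implacable opponent
    ("troll",      "good_faith"): 3,   # cheap shot the other side works hard to answer
    ("troll",      "troll"):      0,   # nobody learns anything, but nobody loses much
}

def best_response(their_strategy: str) -> str:
    """Pick whichever of my strategies pays more against a fixed opponent."""
    return max(("good_faith", "troll"),
               key=lambda mine: PAYOFF[(mine, their_strategy)])

# Under these numbers, trolling is the better reply no matter what the other
# person does -- the prisoner's-dilemma structure.
assert best_response("good_faith") == "troll"
assert best_response("troll") == "troll"
```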

Your tool is based on votes. People often vote for opinions they agree with and against those they disagree with. Sometimes they vote for well-thought-out arguments and against low-effort trolling. So your algorithm basically divides people into four groups. Group one are people who both have unpopular opinions and express them in toxic, low-effort ways. They are extremely likely to be banned algorithmically because they get both kinds of downvotes.

Group two are people who have unpopular opinions but are good at expressing themselves in a way such that several people who don't agree with them still value their contribution. Your algorithm is likely to allow them to participate even with the tax of downvotes they get due to the unpopularity of their views. These people also make the most valuable contribution to a forum based on good faith discussion and debate, because if they leave, you are left with the last two groups: group three, people with popular opinions and high effort, and group four, people with popular opinions and low effort. A space made up primarily of groups three and four, with the other two excluded, is an echo chamber.
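If that framing is right, the sorting looks roughly like this. It's a conceptual sketch of the two axes, not a claim about how your bot actually works, since the bot only sees vote totals that mix both signals together:

```python
def classify_user(opinion_is_popular: bool, engages_in_good_faith: bool) -> int:
    """Four-group framing from above: opinion popularity crossed with effort.
    A conceptual sketch only -- the real bot works from raw vote data."""
    if not opinion_is_popular and not engages_in_good_faith:
        return 1  # unpopular and low effort: downvoted on both counts, likely banned
    if not opinion_is_popular and engages_in_good_faith:
        return 2  # unpopular but valued: the contributors worth keeping
    if opinion_is_popular and engages_in_good_faith:
        return 3  # popular and high effort
    return 4      # popular and low effort: the group I'm worried about
```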

Group four is the problem. If they are allowed to participate in discussion without repercussions, they will eventually drive group two out, either by making them feel their time is being wasted so they leave, or by pushing them to change strategy and join group one. There is no simple algorithmic solution to this problem. I think your experiment has attracted a number of group two people due to its novelty and the over-representation of anarchists on the instance you've chosen to host it, but they are not guaranteed to continue to participate. Lemmy.World is a pretty low bar to use as a measuring stick, but given the incentive structure at play, I think there is a real danger of falling below that standard unless the bot's algorithmic decisions are complemented by active human moderators who dis-incentivize and weed out people from group four.

[–] auk 1 points 2 months ago

Really? I am surprised. I agree with your categories, but when I examine the comments sections, it looks like the removal of group one is moving people from group four into group three, and giving them space to talk with each other and disagree without the entire environment being so combative that it becomes impossible to do so.

The final comments section example is not ideal, but it's also not an echo chamber. The lemmy.ml version of the comments section is better, which is a problem, but none of the users from the lemmy.ml comments are banned in !pleasantpolitics@slrpnk.net, so I think the problem is cultural and not technical. I do agree with the need to protect the minority opinions from getting ganged up on by group four, but outside that one post I don't see it happening at all, and everyone's still welcome to say what they want.

There's also a key distinction within group two. Users who post only opinions that fall into group two are likely to be banned. The users for whom I disagree with the bot's decision almost all fit into this category. There is a large group, however, who post group two opinions alongside a healthy amount of positive engagement on other topics. I convinced myself that the result was okay, since I had to admit that most of the users I looked at seemed to be engaging almost exclusively with their chosen single issue or group of issues, and not with a balanced set of views, some popular and some not.

I do worry about this issue. I keep waiting for someone to bring up a specific user who is, for example, in group two and being banned even though their engagement is a clear net positive for the community. But so far, I've unearthed far more of those myself, and fretted about them, than anyone has sent to me. At the end of the day, I decided that aiming for perfection was impossible, and that as long as the comments seem to display a diversity of opinion and positive engagement, that was good enough as a place to start.

Can you think of a good post to bait group four into coming in and overwhelming the comments? Or do you think these existing test cases are already showing that? It would be difficult for this approach to totally prevent that problem, without a lot of moderator intervention to enforce a productiveness standard for each comment, but gathering data about the problem can still be a good thing.