this post was submitted on 20 Sep 2023
221 points (89.6% liked)

[–] ondoyant@beehaw.org 1 points 1 year ago

> Those two sentences are contradictory. There is no such thing as lawful, violent speech, nor unlawful, non-violent speech. No violent speech is protected; no non-violent speech is prohibited.

i've given several examples where that isn't so clear-cut, but whatever. speech is a behavior, and can modulate how we act. if you tell people that a group of people is evil, and never say what to do about it, you still increase the likelihood that somebody will act on the belief that that group is evil. there are material consequences of speech that fall between causing violence and not causing it.

> We don’t have an authority to tell us exactly where that line is. We do have the consensus of society in general, who we can consult - formally or informally - on whether that line has been crossed.

the barriers of lawfulness, violence, and all that are socially defined, yes, but if you concede that much, then there will be communities that define racism, bigotry, and other forms of inflammatory speech as violent, and decide that those things ought not to be in their social spaces. unless you're appealing to the group consensus of the largest possible group, there will be subcultures that disagree with each other on what does and doesn't constitute violent speech. if you're appealing to the legality of speech, you aren't appealing to group consensus, you're appealing to the government.

so either we as autonomous communities ought to draw our own lines for what is and isn't violent speech ourselves (what i believe), or there is a precise legal definition we have to adhere to, given to us by the government. in reality, it's both. there are firm lines of conduct that the government prohibits in theory (though i would dispute their efficacy), and there are communities that disagree on what the limit should be. i don't think that having codes of conduct in this way is necessarily authoritarian.

> “Content moderation” replaces that societal consensus with authoritarian opinion. When you decide I don’t need to hear from Redneck Russell about how he hates Jews, I am harmed. I don’t get to challenge Russell’s opinions, or argue with him, or rally people against him. In silencing him, you’ve taken away my ability to engage him. He still gets to recruit his disciples into his own little spaces out of your control. If I try to engage him there, he merely silences me, censors me. His acolytes never hear a dissenting opinion against him, because he, and you, have decided I don’t need to engage him.

to be clear, i am here talking to you because i prefer the model that federated services use for moderating their communities, and believe that having tech companies be the sole arbiter of what is and isn't proper speech is a fundamentally flawed approach. that being said, the problem i have with your solution is one shared by a lot of community moderation on platforms: it relies on people being willing and able to confront and defuse bigotry on an individual level. i'm jewish. i don't want to hear what Redneck Russell has to say. i doubt that i could say anything to him to change his mind, and i don't want my internet experience to be saturated in Russells, for the basic reason that i want my time online to be relatively relaxing. people who are less attached to jewish identity are even less likely to engage with him, because it doesn't affect them personally, internet arguments are often unpleasant, and they also want their time online to be relatively relaxing.

so how do things pan out if a community is only loosely engaged? well, if we aren't relying on moderators to curate our platforms, the hate-motivated Russells of the world are empowered to say their bullshit, they receive relatively little resistance, and the relative permissiveness attracts more Russells. the people who want a nice place to hang out online go elsewhere, the concentration of Russells rises, and we're left with a platform that is actively hostile towards jewish people. oops!

if you are part of a focused, highly engaged community, maybe your solution works, but most online spaces are not focused and highly engaged. i agree generally that echo chambers are problematic, but i think on the whole that federation does more to mitigate that than large, algorithmically segregated platforms do. i don't agree that banning or blocking won't play a role in ensuring that social spaces are friendly and enjoyable to be in, especially for marginalized groups. if you let people say the n word on your platform, and don't do anything about the people who do, don't expect many people of color to want to be where you are. it's just not fun to hang out with bigots if you're the one they're targeting, and that will affect the culture of your platform.

> Content moderation should not take the form of banning or blocking speech outright, and should not be conducted unilaterally. Moderation should be community driven and transparent. Anyone should be able to see what was hidden, so they can determine for themselves if the censorship was reasonable and appropriate. The content should remain readily available, perhaps “hidden” behind an unexpanded tab rather than deleted entirely.

i think it really isn't so simple. some people are more invested in a community than others, and lots of people are just... not interested in auditing their moderators. generally i think it's a good idea for moderation to be transparent, certainly better than what any major social media platform does, but at a certain point it does just come down to trust. for example, i agree broadly with the code of conduct for Beehaw; that's why i have an account there. i'm generally uninterested in trying to verbally spar with bigots, i don't want to engage deeply with the moderation of the platform, and i have no interest in litigating what is and isn't proper conduct on the site. that's not what i use the internet for. lots of people who are the target of bigotry and hatred just... don't really want to constantly be on guard for that shit. they want a space where they can exist without being confronted with cruelty. i wouldn't want to be on the kind of platform you're describing, sorry.