this post was submitted on 20 Sep 2023
221 points (89.6% liked)

[–] Rivalarrival@lemmy.today 1 points 1 year ago (1 children)

No, I did not read the rest of it. Again, the premise of your argument was a strawman about death threats, and I refuse to engage with that premise. Demonstrate comprehension of that distinction, or find someone else to argue with.

[–] ondoyant@beehaw.org 1 points 1 year ago (1 children)

read the rest of it. or don't, whatever. the majority of the post did conform to your specifications. i object to your framing; i just don't think it's settled ground that these things would be handled appropriately by a court of law, or that they are being handled in the way you have previously described. but i would also just generally recommend reading what somebody says before deciding what their argument is? even if just for curiosity's sake. that's a weird way of engaging with somebody.

[–] Rivalarrival@lemmy.today 1 points 1 year ago (1 children)

I'll read it eventually, but I won't engage with it. This topic is too sensitive and contentious to allow that sort of misconception to creep in. I am not interested in derailing a discussion on censorship by conflating speech with violence.

would also just generally recommend reading what somebody says before deciding what their argument is?

Apply that argument to someone who has been censored/silenced, and you might begin to understand why I oppose it.

[–] ondoyant@beehaw.org 1 points 1 year ago (1 children)

Apply that argument to someone who has been censored/silenced, and you might begin to understand why I oppose it.

ugh. i know you think that's clever, but it's just confusing. why would they be judged by anything other than the content of their arguments? that's why people get banned: it's because of what they're saying! i don't hold the position that people should be banned or moderated for something other than their behavior; that wouldn't make sense. in any case, i'm not conflating speech with violence, and i'm not misconceiving anything. i disagree with the premise that speech and violence are discrete from one another. they operate on a continuum. there is speech that is more violent than other speech, and we should have tools for dealing with the things that can lead to violence but are not in and of themselves violence. content moderation is one of those tools.

[–] Rivalarrival@lemmy.today 1 points 1 year ago (1 children)

in any case, i'm not conflating speech with violence, i'm not misconceiving anything. i disagree with the premise that speech and violence are discrete from one another.

Those two sentences are contradictory. There is no such thing as lawful, violent speech, nor unlawful, non-violent speech. No violent speech is protected; no non-violent speech is prohibited. We don't have an authority to tell us exactly where that line is. We do have the consensus of society in general, who we can consult - formally or informally - on whether that line has been crossed.

"Content moderation" replaces that societal consensus with authoritarian opinion. When you decide I don't need to hear from Redneck Russell about how he hates Jews, I am harmed. I don't get to challenge Russell's opinions, or argue with him, or rally people against him. In silencing him, you've taken away my ability to engage him. He still gets to recruit his disciples into his own little spaces out of your control. If I try to engage him there, he merely silences me, censors me. His acolytes never hear a dissenting opinion against him, because he, and you, have decided I don't need to engage him.

They occasionally come out of their little holes, spout their nonsense in your forums, and proudly tell their compatriots that you banned them from talking to your community members because you couldn't engage them.

Content moderation should not take the form of banning or blocking speech outright, and should not be conducted unilaterally. Moderation should be community driven and transparent. Anyone should be able to see what was hidden, so they can determine for themselves if the censorship was reasonable and appropriate. The content should remain readily available, perhaps "hidden" behind an unexpanded tab rather than deleted entirely.

[–] ondoyant@beehaw.org 1 points 1 year ago

Those two sentences are contradictory. There is no such thing as lawful, violent speech, nor unlawful, non-violent speech. No violent speech is protected; no non-violent speech is prohibited.

i've given several examples where that isn't as clear-cut, but whatever. speech is a behavior, and it can modulate how we act. if you tell people that a group of people is evil, and never say what to do about it, you still increase the likelihood that somebody will act on the belief that that group is evil. there are material consequences of speech that fall between causing violence and not causing violence.

We don’t have an authority to tell us exactly where that line is. We do have the consensus of society in general, who we can consult - formally or informally - on whether that line has been crossed.

the boundaries of lawfulness, violence, and all that are socially defined, yes, but if you concede that much, then there will be communities that define racism, bigotry, and other forms of inflammatory speech as violent, and decide that those things ought not to be in their social spaces. unless you're appealing to the group consensus of the largest possible group, there will be subcultures that disagree with each other on what does and doesn't constitute violent speech. if you're appealing to the legality of speech, you aren't appealing to group consensus, you're appealing to the government. so either we as autonomous communities ought to draw our own lines for what is and isn't violent speech ourselves (what i believe), or there is a precise legal definition we have to adhere to, given to us by the government. in reality, it's both: there are firm lines of conduct that the government prohibits in theory (though i would dispute their efficacy), and there are communities that disagree on what the limit should be. i don't think that having codes of conduct in this way is necessarily authoritarian.

“Content moderation” replaces that societal consensus with authoritarian opinion. When you decide I don’t need to hear from Redneck Russell about how he hates Jews, I am harmed. I don’t get to challenge Russell’s opinions, or argue with him, or rally people against him. In silencing him, you’ve taken away my ability to engage him. He still gets to recruit his disciples into his own little spaces out of your control. If I try to engage him there, he merely silences me, censors me. His acolytes never hear a dissenting opinion against him, because he, and you, have decided I don’t need to engage him.

to be clear, i am here talking to you because i prefer the model that federated services use for moderating their communities, and i believe that having tech companies be the sole arbiter of what is and isn't proper speech is a fundamentally flawed approach. that being said, the problem i have with your solution is one shared by a lot of community moderation on platforms: it relies on people being willing and able to confront and defuse bigotry on an individual level. i'm jewish. i don't want to hear what Redneck Russell has to say. i doubt that i could say anything to him to change his mind, and i don't want my internet experience to be saturated in Russells, for the basic reason that i want my time online to be relatively relaxing. people who are less attached to jewish identity are even less likely to engage with him, because it doesn't affect them personally, internet arguments are often unpleasant, and they also want their time online to be relatively relaxing. so how do things pan out if a community is only loosely engaged? well, if we aren't relying on moderators to curate our platforms, the hate-motivated Russells of the world are empowered to say their bullshit, they receive relatively little resistance, and the relative permissiveness attracts more Russells. the people who want a nice place to hang out online go elsewhere, the concentration of Russells rises, and we're left with a platform that is actively hostile towards jewish people. oops!

if you are part of a focused, highly engaged community, maybe your solution works, but most online spaces are not focused and highly engaged. i agree generally that echo chambers are problematic, but i think on the whole that federation does more to mitigate that than large, algorithmically segregated platforms. i don't agree that banning or blocking can't play a role in ensuring that social spaces are friendly and enjoyable to be in, especially for marginalized groups. if you let people say the n word on your platform, and don't do anything about the people who do, don't expect many people of color to want to be where you are. it's just not fun to hang out with bigots if you're the one they're targeting, and that will affect the culture of your platform.

Content moderation should not take the form of banning or blocking speech outright, and should not be conducted unilaterally. Moderation should be community driven and transparent. Anyone should be able to see what was hidden, so they can determine for themselves if the censorship was reasonable and appropriate. The content should remain readily available, perhaps “hidden” behind an unexpanded tab rather than deleted entirely.

i think it really isn't so simple. some people are more invested in a community than others, and lots of people are just... not interested in auditing their moderators. generally i think it's a good idea for moderation to be transparent, certainly better than what any major social media platform does, but at a certain point it does just come down to trust. for example, i agree broadly with the code of conduct for Beehaw; that's why i have an account there. i'm generally uninterested in trying to verbally spar with bigots, i don't want to engage deeply with the moderation of the platform, and i have no interest in litigating what is and isn't proper conduct on the site. that's not what i use the internet for. lots of people who are the target of bigotry and hatred just... don't really want to constantly be on guard for that shit. they want a space where they can exist without being confronted with cruelty. i wouldn't want to be on the kind of platform you're describing, sorry.