Comics
This is a community for everything comics related! A place for all comics fans.
Rules:
1- Do not violate lemmy.ml site-wide rules
2- Be civil.
3- If you are going to post NSFW content that doesn't violate the lemmy.ml site-wide rules, please mark it as NSFW and add a content warning (CW). This includes content that shows the killing of people and/or animals, gore, content that talks about or depicts suicide, content that talks about sexual assault, etc. Please use your best judgement. We want to keep this space safe for all our comic lovers.
4- No Zionism or Hasbara apologia of any kind. We stand with Palestine 🇵🇸 . Zionists will be banned on sight.
5- The moderation team reserves the right to remove any post or comment that it deems necessary for the well-being and safety of the members of this community, and the same goes for temporarily or permanently banning any user.
Guidelines:
- If possible, give us your sources.
- If possible, credit the creator of each comic in the title or body of your post. If you are the creator, please credit yourself. A simple “- Me” would suffice.
- In general terms, include as much information as possible in the body of your post (dates, creators, editors, links).
- If you found the image on the web, it is encouraged to put the direct link to the image in the ‘Link’ field when creating a post, instead of uploading the image to Lemmy. Direct links usually end in .jpg, .png, etc.
- One post per topic.
Threats of violence are not social disputes.
The rest of your argument is predicated on this fallacy, so I will ignore it.
oh that's why you'll ignore it, huh?
Correct. I am not defending death threats or threats of violence in any way, and I will not allow you to portray me as doing so. Please confine your arguments to forms of speech that do not rise to the level of criminality.
Fascism arises when dissent is silenced. Death threats are not dissent.
that's the thing, we don't live in a world where death threats and threats of violence are being dealt with in the way you seem to think they are, and community tools like bans are sometimes the only recourse people have that isn't ruinously expensive, glacially slow, and uncertain to work.
but sure, let's say we aren't talking about explicit death threats or threats of violence. instead, they just... post the account information of queer tiktok creators, and spend most of their time calling queer people groomers and pedophiles. it's not directly a threat of violence, but every time they post something, the accounts they post get harassed by tons of anonymous followers, one of them figures out where they live, and then starts bombarding a real human person with death threats. everybody doing the death threats is anonymous, there's no way for the legal system to touch them. what do we do? nothing? or somebody's whole online presence is talking about the great replacement, how the anglo-saxon race is being exterminated, and somewhere down the line we start seeing mass shooters pop up saying nearly the exact same thing in their manifestos. stochastic terrorism. using speech to motivate anonymous observers to take violent action, without calling for violence explicitly. should nothing be done about that? is that not concerning to you?
i think you have a very simplistic definition of what fascism is, and what can or cannot be defined as a threat of violence. there is nuance to what should and should not be considered hate speech, and if you're defending the institution of slavery, implying queer people are groomers, really doing any sort of bigotry, it can meaningfully cause harm to people even if it isn't in and of itself a threat of violence. what do we do then? either nothing or put them in jail? because i think that having more than one way of mediating and enacting punishment for misbehavior is a good thing. i think that being able to respond proportionately to assholes without waiting for them to reach the threshold of illegality is a more healthy way of maintaining a community than putting a firm barrier between "dissent" and "actual crime".
I am not interested in discussing death threats.
I will not discuss criminal speech, let alone defend it. I refuse to take the position you are attempting to assign to me. I do not accept your red herring and strawman arguments.
The overwhelming majority of bans, blocks, and other fascist, silencing behaviors are in response to non-criminal speech. Please confine your arguments to such speech.
right... did you read the rest of it? because i did make a relevant argument like right below that.
No, I did not read the rest of it. Again, the premise of your argument was a strawman about death threats, and I refuse to engage with that premise. Demonstrate comprehension of that distinction, or find someone else to argue with.
read the rest of it. or don't, whatever. the majority of the post did conform to your specifications. i object to your framing; i just don't think it's settled ground that these things would be handled appropriately by a court of law, or that they are being handled in the way you have previously described. but i would also just generally recommend reading what somebody says before deciding what their argument is, even if just for curiosity's sake. that's a weird way of engaging with somebody.
I'll read it eventually, but I won't engage with it. This topic is too sensitive and contentious to allow that sort of misconception to creep in. I am not interested in derailing a discussion on censorship by conflating speech with violence.
Apply that argument to someone who has been censored/silenced, and you might begin to understand why I oppose it.
ugh. i know you think that's clever, but it's just confusing. why would they be judged by anything other than the content of their arguments? that's why people get banned: it's because of what they're saying! i don't hold the position that people should be banned or moderated for anything other than their behavior, that wouldn't make sense. in any case, i'm not conflating speech with violence, and i'm not misconceiving anything. i disagree with the premise that speech and violence are discrete from one another. they operate on a continuum. there is speech that is more violent than other speech, and we should have tools for dealing with the things that can lead to but are not in and of themselves violence. content moderation is one of those tools.
Those two sentences are contradictory. There is no such thing as lawful, violent speech, nor unlawful, non-violent speech. No violent speech is protected; no non-violent speech is prohibited. We don't have an authority to tell us exactly where that line is. We do have the consensus of society in general, who we can consult - formally or informally - on whether that line has been crossed.
"Content moderation" replaces that societal consensus with authoritarian opinion. When you decide I don't need to hear from Redneck Russell about how he hates Jews, I am harmed. I don't get to challenge Russell's opinions, or argue with him, or rally people against him. In silencing him, you've taken away my ability to engage him. He still gets to recruit his disciples into his own little spaces out of your control. If I try to engage him there, he merely silences me, censors me. His acolytes never hear a dissenting opinion against him, because he, and you, have decided I don't need to engage him.
They occasionally come out of their little holes, spout their nonsense in your forums, and proudly tell their compatriots that you banned them from talking to your community members because you couldn't engage them.
Content moderation should not take the form of banning or blocking speech outright, and should not be conducted unilaterally. Moderation should be community driven and transparent. Anyone should be able to see what was hidden, so they can determine for themselves if the censorship was reasonable and appropriate. The content should remain readily available, perhaps "hidden" behind an unexpanded tab rather than deleted entirely.
i've given several examples where that isn't as clear cut, but whatever. speech is a behavior, and can modulate how we act. if you tell people that a group of people is evil, and never say what to do about it, you still increase the likelihood that somebody will act on the belief that that group of people is evil. there are material consequences for speech that fall between causing violence and not causing violence.
the barrier of lawfulness, violence, and all that are socially defined, yes, but if you concede that much, then there will be communities that define racism, bigotry, and other forms of inflammatory speech as violent, and decide that those things ought not to be in their social spaces. unless you're appealing to the group consensus of the largest possible group, there will be subcultures that disagree with each other on what does and doesn't constitute violent speech. if you're appealing to the legality of speech, you aren't appealing to group consensus, you're appealing to the government. so either we as autonomous communities ought to draw our own lines for what is and isn't violent speech ourselves (what i believe), or there is a precise legal definition we have to adhere to, given to us by the government. in reality, it's both. there are firm lines of conduct that the government prohibits in theory (though i would dispute their efficacy), and there are communities that disagree on what the limit should be. i don't think that having codes of conduct in this way is necessarily authoritarian.
to be clear, i am here talking to you because i prefer the model that federated services use for moderating their communities, and believe that having tech companies be the sole arbiter of what is and isn't proper speech is a fundamentally flawed approach. that being said, the problem i have with your solution is one that's shared with a lot of community moderation on platforms: it relies on people being willing and able to confront and defuse bigotry on an individual level. i'm jewish. i don't want to hear what Redneck Russell has to say. i doubt that i could say anything to him to change his mind, and i don't want my internet experience to be saturated in Russells, for the basic reason that i want my time online to be relatively relaxing. people who are less attached to jewish identity are even less likely to engage with him, because it doesn't affect them personally, internet arguments are often unpleasant, and they also want their time online to be relatively relaxing. so how do things pan out if a community is only loosely engaged? well, if we aren't relying on moderators to curate our platforms, the hate-motivated Russells of the world are empowered to say their bullshit, they receive relatively little resistance, and the relative permissiveness attracts more Russells. the people who want a nice place to hang out online go elsewhere, the concentration of Russells rises, and we're left with a platform that is actively hostile towards jewish people. oops!
if you are part of a focused, highly engaged community, maybe your solution works, but most online spaces are not focused and highly engaged. i agree generally that echo chambers are problematic, but i think on the whole that federation does more to mitigate that than large, algorithmically segregated platforms. i don't really agree that banning or blocking won't play a role in ensuring that social spaces are friendly and enjoyable to be in, especially for marginalized groups. if you let people say the n word on your platform, and don't do anything about the people who do, don't expect many people of color to want to be where you are. it's just not fun to hang out with bigots if you're the one they're targeting, and that will affect the culture of your platform.
i think it really isn't so simple. some people are more invested in a community than others, lots of people are just... not interested in auditing their moderators. generally i think it's a good idea to have it be transparent, certainly better than what any major social media platform does, but at a certain point it does just come down to trust. for example, i agree broadly with the code of conduct for Beehaw; that's why i have an account there. i'm generally uninterested in trying to verbally spar with bigots, i don't want to engage deeply with the moderation of the platform, i have no interest in litigating what is and isn't proper conduct on the site, that's not what i use the internet for. lots of people who are the target of bigotry and hatred just... don't really want to constantly be on guard for that shit. they want a space where they can exist without being confronted with cruelty. i wouldn't want to be on the kind of platform you're describing, sorry.
in any case, i think i'm basically done with you. the world isn't made of neat little blocks you can arrange to your liking. the barrier between criminal and non-criminal speech is socially constructed, and the conduct of individuals doesn't go from perfectly fine to absolutely unacceptable in an instant. it's more nuanced than that, and the way we interact with each other should reflect that nuance. like it or not, we have to be the ones to determine what is and is not a threat; it cannot be deferred to an authority unquestioningly.