this post was submitted on 02 Jul 2023
44 points (100.0% liked)
Beehaw Support
you are viewing a single comment's thread
You're not wrong, but we don't have the tools to screen out bad actors and to moderate appropriate, professional discussions on complicated, nuanced topics with experts to help address what's signal and what's noise. This is simply not the venue for that.
I want to point out two things here:
I don't think you need experts in a given field to recognize whether a discussion is being conducted in a respectful and constructive way or not. The participants themselves are likely to tell you that through the report function.
Suggestion: For tools, would it be possible to set a discussion to be collapsed/hidden/flagged when in doubt? (like what Reddit does through downvotes, but via mod action) Then just let the participants continue if they wish, without disturbing the rest of the community.
That requires first identifying that I lack education on a subject, and while I'm all for that, after so many years there are likely many subjects I believe I know enough about to present a point of view, even if I might be wrong. Googling each and every possible subject "just in case" before commenting isn't practical.
Heated discussions get used politically all the time, and in this age of AI chatbots, professional troll farms, and misinformation spam, heat can even be manufactured precisely to push a topic out of rational discussion.
I don't think "heat" is a good indicator of whether a subject should be avoided; pretty much every subject covered by the communities here is used in heated discussions somewhere else.
Wouldn't that leave only the most mundane, bland, lowest-common-denominator subjects? But if everyone already knows everything, with the same point of view, what's left to talk about?
Suggestion: I think stating that it's a "non-exhaustive list of examples" would take care of that. Right now there is one example (phrenology), all I suggest is adding more examples.
(and anyway, Reddit's explicit list didn't stop them from banning me for talking about something only tangentially related to one of the points... ultimately, instance admins have all the power, even to edit user comments; Reddit has done it)
That's okay that you don't think so, but we as a team absolutely do. It's part of the reason we've not created communities such as "mental health", "legal", and other places which are at risk of uninformed opinions causing serious and actual harm.
This functionality does not exist. Even if it did exist, I would not want it used for this purpose.
It sure does! There's an educational burden on you if you want to speak about subjects which have real world effects on others but not on you. If you don't do that research before asking questions, that's your fault and not ours - it's not difficult to ask yourself whether something might negatively affect others, or to at least do a cursory search on Google, go to the library and find some reading, or otherwise receive some base level of education before discussing a charged question on the platform. You have a responsibility to your fellow humans to be educated in this manner before broaching a topic in a public space.
In the scope of the document linked, we're going to take into consideration the viewpoints of others. People who have suffered sexual abuse won't particularly like you going on amateurishly about whether you think there are real risks, however, so if you try to start a conversation about this you might find your content removed, and if this becomes a repeat problem you may end up temporarily and eventually permanently banned.
I don't agree with this statement at all. If a discussion ultimately questions someone else's humanity, it's not a great subject to discuss when those people are present. Or if you do, and do so without considering the opinions and thoughts of this group or at the very least become educated on this issue, you should expect consequences to your speech - such as being insulted, having your comment removed, or being removed from your platform entirely.
I don't have the time or energy to build a list of all content in the world aimed at dehumanizing others. If you do, more power to you, I'd encourage you to make and maintain said list with my blessing.
You may not have created specific communities, but both mental health advice and legal commentary are already being offered in the communities that do exist. Does this mean that content should be avoided and/or reported? I hope not... it's actually interesting.
That gets us back to my initial question: if I speak about a subject that has direct effects on me, but only tangentially references effects on others but not on me... what's the stance on that?
Fun fact: my "sexualization of minors" ban on Reddit came from citing a book... so this doesn't seem like a safe recommendation... 😐
I got that, don't make people uncomfortable. I'm even fine with backing off when made aware of it, even if I actually have more than amateurish knowledge about a subject. Heck, I'm even fine with my expert knowledge getting removed (did that on Reddit myself already).
What I wouldn't like, is to get banned because someone felt uncomfortable and I wasn't made aware, or someone thought that someone might have felt uncomfortable by proxy, without a chance to fix it.
I agree with that.
My objection was that there are people "out there" who will use any subject to dehumanize others, even when the subjects themselves are not inherently dehumanizing and can otherwise be discussed with respect. Should we let third parties guide which subjects should be banned, just because someone might have seen them use it in a dehumanizing way?
There is also the matter of which people "are present", since the contents here are public and even federated, so technically "everyone is present".
Yeah... it's not an attractive task. I was thinking that since mods are going to see the content anyway, you could run it as a kind of FAQ, just add items to a list when you see them appear on the instance. Kind of "I'll know it when I see it, and now everyone else will too".
I wonder if a list could be extracted from the modlog... I'll look into that.
The best I can say to you at this point is that if you've received pushback in the past, it's probably not meant for this site. I can't itemize everything for you. I understand you're neurodivergent and need a bit more clarity on what's acceptable and what's not, but I don't have the time to build that list for you. Maybe just avoid any subject you have questions about, or haven't seen others discussing, to be safe.
Make that neurodivergent, disabled, and abused for enough time to have received pushback on all subjects. At this point I feel like the only safe course of action is to shut up and disappear, since I'm definitely not going to go to "free speech" places to get more of the same. Anyway, sorry for dragging this out, and guess we'll just see how it goes.