Gaywallet

joined 2 years ago
[–] Gaywallet@beehaw.org 10 points 3 hours ago (3 children)

For those who are curious, here are the exact questions used and the percentages by demographic

Generally speaking I'd also fall into the "rather play games" category, but it really depends on the context. Unfortunately there aren't too many couch co-op kinds of games anymore, so if the goal is to spend time with someone, playing a video game often doesn't work great.

[–] Gaywallet@beehaw.org 10 points 2 days ago

Great read! Thank you

[–] Gaywallet@beehaw.org 1 points 2 days ago

oof, big flaw there

[–] Gaywallet@beehaw.org 5 points 3 days ago* (last edited 3 days ago)

Any information humanity has ever preserved in any format is worthless

It's like this person only just discovered science, lol. Has this person never realized that bias is a thing? There's a reason we learn to cite our sources: people need the context of what bias is being shown. Entire civilizations have been erased by the people who conquered them; do you really think the conquerors didn't re-write the history of who those people were? Has this person never followed scientific advancement, where people test and validate that results can be reproduced?

Humans are absolutely gonna human. The author is right to realize that a single source carries a lot less factual weight than many sources, but it's catastrophizing to call it worthless, and it ignores how additional information can add to or detract from a particular claim, so long as we examine the biases present in the creation of said information resources.

[–] Gaywallet@beehaw.org 4 points 4 days ago (1 children)

I've personally found it's best to just directly ask questions when people say things that are cruel, come from a place of contempt, or otherwise seem intended to start conflict. "Are you saying x?" but in much clearer words is a great way to get people to reveal their true nature. There is no need to be charitable if you've asked them and they don't back off, or if they agree with whatever terrible sentiment you just asked whether they held. Generally speaking, people who aren't malicious will not only back off on what they're saying, they'll put in extra work to clear up any confusion. If someone doesn't bother to clear up confusion around some perceived hate or negativity, it can be a more subtle signal that they aren't acting in good faith.

If they do back off but only as a means to bait you (such as by refusing to elaborate or by deflecting), they'll invariably continue to push boundaries or make other masked statements. If you stick to that same strategy, have to ask for clarification three times, and they keep pushing in the same direction, I'd say it's safe to move on at that point.

As an aside - it's usually much more effective to feel sad for them than it is to be angry or direct. But honestly, it's better to simply not engage. Most of these folks are hurting in some way, and they're looking to offload the emotional labor onto others, or to quickly feel good about themselves by putting others down. Engaging just reinforces the behavior and frankly wastes your time, because it's not about the subject they're talking about... it's about managing their emotions.

[–] Gaywallet@beehaw.org 9 points 4 days ago

For those who are reporting this: it's a satire piece, and it's in the correct sub

[–] Gaywallet@beehaw.org 1 points 5 days ago (3 children)

Could you be a little bit more specific? Do you have an example or two of people/situations you struggled to navigate? Bad intentions can mean a lot of things, and understanding both how you respond and how you wish you were responding could be really helpful for figuring out where the process is breaking down and what skills might be most useful.

[–] Gaywallet@beehaw.org 5 points 5 days ago

Cheers for this, I found two games that seem interesting that I'd never heard of before!

[–] Gaywallet@beehaw.org 5 points 6 days ago

This isn't just about GPT. Of note, one example from the article:

The AI assistant conducted a Breast Imaging Reporting and Data System (BI-RADS) assessment on each scan. Researchers knew beforehand which mammograms had cancer but set up the AI to provide an incorrect answer for a subset of the scans. When the AI provided an incorrect result, researchers found inexperienced and moderately experienced radiologists dropped their cancer-detecting accuracy from around 80% to about 22%. Very experienced radiologists’ accuracy dropped from nearly 80% to 45%.

In this case, researchers manually spoiled the results of a non-generative AI designed to highlight areas of interest. Being presented with incorrect information reduced the accuracy of the radiologists. This kind of bias is important to highlight, and it's of critical importance when we talk about when and how to ethically introduce any form of computerized assistance in healthcare.

[–] Gaywallet@beehaw.org 4 points 6 days ago

ah yes, i forgot that this article was written specifically to address you and only you

[–] Gaywallet@beehaw.org 8 points 1 week ago (1 children)

I appreciate your warning, and would like to echo it, from a safety perspective.

I would also like to point out that we should be approaching this, as with every risk, from a harm reduction standpoint. A drug with impurities that could save your life or prevent serious harm is better than no drug and death. People need to be empowered to make the best decisions they can, given the available resources and education.

[–] Gaywallet@beehaw.org 2 points 1 week ago* (last edited 1 week ago)

by creating longer lines and a wasting their tax funds

This assumes the voting process will stay exactly the same as it is today.

Of note - mandatory only means that voting is legally required. It does not mean you have to force people to show up. It specifies nothing in terms of actual implementation, other than a law requiring a vote.
