this post was submitted on 27 Feb 2024
166 points (87.7% liked)
Technology
I have a suspicion that this is exactly what’s going on here and may be why past studies found no differences. AI is much better at quickly synthesizing complex patterns into coherent categories than humans are.
Also, 90% is not that good, all things considered. The brain is almost certainly a complex mix of features that defies black-and-white categorization.
Hopefully we will be wise enough not to require trans people to prove their trans-ness scientifically. People have a right to do what they wish with their bodies and to express their gender in a way that feels right to them, and they should not be required to match some artificial physical diagnosis of what it means to be trans. Even if it turns out that most trans people do share certain brain structures or patterns, there will always be exceptions, and that doesn't mean we get to label someone's identity as inauthentic.
Unlikely as it might be, maybe the 10% error rate comes from genderqueer people who haven't realized/faced it yet.
There are a lot of potential explanations. In essence, they built a model to classify brain features as male or female, then tested it against the male/female label assigned to each brain. So the 10% mismatch could result from problems with the model's predictions, or just as easily from errors in their "correct" labeling of each brain as male or female.
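To make that concrete, here's a minimal sketch (with made-up numbers, not the study's actual data or method) of how label error alone caps measured accuracy. Even a model that perfectly recovers the underlying category would score around 92% if 8% of the dataset's labels don't match that category:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
true_class = rng.integers(0, 2, n)  # the pattern the model actually learns

# Hypothetical: 8% of the dataset's male/female labels disagree with it.
flipped = rng.random(n) < 0.08
recorded_label = np.where(flipped, 1 - true_class, true_class)

# A model that predicts the underlying class perfectly...
prediction = true_class

# ...still only "agrees" with the recorded labels about 92% of the time.
accuracy = (prediction == recorded_label).mean()
print(accuracy)
```

The point isn't the specific numbers; it's that a benchmark can never distinguish model error from label error using the labels alone.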
So a big question is: how did they define male and female? By genetics? By reproductive anatomy? By self-reported identity? The article doesn't say. All of these things are very likely correlated with what's happening in the brain, but probably not perfectly. It's worth noting that many definitions of sex don't consider gender identity at all; if such a definition was used, then a trans man might be labeled female in their data, whether or not he has reckoned with his identity.
I looked into this: the study analyzed three pre-existing fMRI datasets. I wasn't able to find any info on how those projects assessed the sex/gender of participants.
Based on this, I'd assume they just used AGAB (assigned gender at birth), since that's how medical professionals typically approach patients in their care.
Given any finite dataset above a trivially small size/complexity, and an undefined set of criteria, the odds of meaningless patterns appearing are extremely high.
Machine-learning algorithms are basically automated p-hackers when misused. Be skeptical of any conclusion drawn from ML that can't be verified some other way.
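A classic demonstration of the p-hacking failure mode (this is a generic textbook example, not what the study did): take pure noise, select the features most "correlated" with random labels using the *whole* dataset, and then evaluate. The selection step leaks the test data into the model, so a classifier scores far above the 50% chance level on data that contains no signal at all:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 10_000
X = rng.standard_normal((n, p))  # pure noise features
y = rng.integers(0, 2, n)        # random labels: no real signal exists

# WRONG: pick the 10 features most associated with the labels
# using ALL the data, including what we'll later "test" on.
score = np.abs(X.T @ (y - y.mean()))  # crude per-feature association score
top = np.argsort(score)[-10:]

# Leave-one-out evaluation with a nearest-centroid rule
# on the pre-selected (leaked) features.
hits = 0
for i in range(n):
    mask = np.arange(n) != i
    Xtr, ytr = X[mask][:, top], y[mask]
    c0, c1 = Xtr[ytr == 0].mean(axis=0), Xtr[ytr == 1].mean(axis=0)
    d0 = np.linalg.norm(X[i, top] - c0)
    d1 = np.linalg.norm(X[i, top] - c1)
    hits += int((d1 < d0) == bool(y[i]))

acc = hits / n
print(acc)  # far above 50%, despite the data being pure noise
```

Done correctly, feature selection would be repeated inside each cross-validation fold, and the accuracy would collapse back to roughly chance.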