this post was submitted on 21 Oct 2024
33 points (100.0% liked)

Technology

top 4 comments
[–] Gaywallet@beehaw.org 14 points 3 weeks ago

> We weren’t surprised by the presence of bias in the outputs, but we were shocked at the magnitude of it. In the stories the LLMs created, the character in need of support was overwhelmingly depicted as someone with a name that signals a historically marginalized identity, as well as a gender marginalized identity. We prompted the models to tell stories with one student as the “star” and one as “struggling,” and overwhelmingly, by a thousand-fold magnitude in some contexts, the struggling learner was a racialized-gender character.

[–] t3rmit3@beehaw.org 9 points 3 weeks ago

Products of a bigoted society go in, bigoted products come out.

> In that regard, developers and decision makers would benefit from centering users’ social identities in their process, and acknowledging that these AI tools and their uses are highly context-dependent. They should also try to enhance their understanding of how these tools might be deployed in a way that is culturally responsive.

You can't correct for bias at the ass-end of a mathematical algorithm. Generative AI is just caricaturing our own society back to us; it's a fun-house mirror that makes our own biases jump out. If they want a model that doesn't produce bigoted outputs, they're going to have to fix their inputs.

[–] kindenough@kbin.earth 8 points 3 weeks ago

AI is inbred and infinitely biased, training on its own output across the internet. It's like a digital tape echo, an echo chamber, an algorithmic circle jerk.

[–] Tezka_Abhyayarshini@lemmy.today 1 point 3 weeks ago

""Infobesity" creatively describes "the function of consuming, without intentional control, a vast array of ultra-processed, commercially produced, and marginally nutritious information. Unchecked, our brains still digest it all using 'stacked' biases which are cognitively 'smoothed over' so we don't see the immediate effect." - Polymathic Being

We operate through biases - https://upload.wikimedia.org/wikipedia/commons/6/65/Cognitive_bias_codex_en.svg

It's part of our operating system, and we are not trained to use these biases correctly, conducively, or in a healthy way. They are algorithms: processes or sets of rules to be followed in calculations or other problem-solving operations, especially by a computer. They can be understood, designed, and engineered.

Lacking informed judgment, informed consent, and informed participation; lacking accuracy about what would responsibly and accountably count as facts; lacking an understanding of healthy, effective prioritization and of natural and logical consequences... and experiencing candid learning disorders... that does lead to dysfunction, don't you think?