this post was submitted on 12 Jun 2023
Technology
Algorithmic bias is typically controlled through additional, human-developed layers that counteract the bias present in the large datasets you ingest for training. But that's extremely work-intensive. I've seen some interesting hypotheticals where algorithms designed specifically to identify bias are used to tune layers with custom weighting, pulling bias back down to acceptable levels; but even then we'll probably need to watch how this changes the language the model produces about the groups for which there is bias.
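To make the "custom weighting" idea concrete, here's a minimal sketch (with made-up group labels) of instance reweighting, one common way to counteract dataset bias before training: each example gets a weight inversely proportional to its group's frequency, so over-represented groups don't dominate the training loss.

```python
# Hypothetical sketch: sklearn-style "balanced" reweighting.
# Groups and data are invented for illustration.
from collections import Counter

def bias_correction_weights(group_labels):
    """Return one weight per example so every group contributes equally in total."""
    counts = Counter(group_labels)
    n_groups = len(counts)
    total = len(group_labels)
    # weight = total / (n_groups * group_count): rarer groups get larger weights
    return [total / (n_groups * counts[g]) for g in group_labels]

groups = ["a", "a", "a", "b"]  # group "a" is over-represented 3:1
print(bias_correction_weights(groups))  # each "a" gets 2/3, the lone "b" gets 2.0
```

With these weights, each group's total contribution is equal (3 × 2/3 = 1 × 2.0), which is the effect the counter-bias layer is trying to achieve.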
I think the trouble with human oversight is that it’s still going to keep whatever bias the overseer has.
AI is programmed by humans and trained on human data, so some bias is unavoidable. Either we're dealing in extremes, where having no bias at all is impossible (which is important framing when measuring bias), or we're talking about how to minimize bias, not eliminate it.
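Minimizing bias starts with measuring it. One standard (of several) fairness metrics is the demographic parity gap: the difference in positive-prediction rates between groups. A small sketch, with hypothetical predictions and groups:

```python
# Hedged sketch: measuring (not eliminating) bias as the gap in
# positive-prediction rates between groups. Data is invented.
def demographic_parity_gap(predictions, groups):
    """Max minus min positive-prediction rate across groups (0 = parity)."""
    by_group = {}
    for p, g in zip(predictions, groups):
        by_group.setdefault(g, []).append(p)
    rates = {g: sum(ps) / len(ps) for g, ps in by_group.items()}
    return max(rates.values()) - min(rates.values())

preds  = [1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "b", "b", "b"]
print(demographic_parity_gap(preds, groups))  # group a's rate is 2/3, group b's 1/3
```

A gap of zero means parity on this metric; in practice you'd track it before and after any counter-bias layer to see whether the mitigation actually pulled the number down.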