This post was submitted on 03 Aug 2023
96 points (73.3% liked)

The student ended up with a fairer complexion, dark blonde hair and blue eyes after her Playground AI request

all 26 comments
[–] jeena@jemmy.jeena.net 49 points 1 year ago (1 children)

To be fair, I used a Chinese AI picture-generator app on my face and it made my face look more Asian. It's obvious that every piece of software has biases toward the people who made and trained it. That's not good, but it's expected, and it's happening everywhere.

[–] EatMyDick@lemmy.world 38 points 1 year ago (1 children)

"Asian MIT grad who knows exactly what she is doing, pretends to be shocked after intentionally triggering industry known bias that are already acknowledged and being worked on"

This is just a student manufacturing a controversy to make sure she has a great talking point at her interviews.

[–] fubbernuckin@lemmy.world 2 points 1 year ago (1 children)

While it's definitely a predictable outcome, it's really not fair to assume that's what her motives were.

[–] EatMyDick@lemmy.world 1 points 1 year ago

Yeah, I'm sure. This MIT computer science grad had noooooo idea what she was doing.

[–] Blamemeta@lemmy.world 34 points 1 year ago

You have to pick the model that fits you and specify what you want. That's how AI works mathematically: it trends toward one image.

It's like buying foundation at random and being upset that it doesn't match your skin tone perfectly.

[–] luthis@lemmy.nz 15 points 1 year ago (1 children)

There's been huge discussion on this already: https://lemmy.nz/post/684888

Sorry, not sure how to write the ! link so the post opens in your instance.

TL;DR

Any result is going to be biased. If it generated a crab wearing lederhosen, that's obviously a bias toward crabs. You can't have unbiased output, because the prompt itself controls the bias. There's no cause for concern here. By default the model outputs the general trend of the data it was trained on. If it had been trained on crabs, it would generate crab-like images.

You can fix bias with LoRAs and good prompting.
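For the curious, here's a minimal sketch of what that looks like with the Hugging Face diffusers library. The base checkpoint is just an example, and the LoRA filename is hypothetical; any LoRA trained toward the look you want would do.

```python
# Minimal sketch: steer a Stable Diffusion pipeline with a LoRA plus an
# explicit prompt, instead of relying on the model's default "professional".
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example base checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# Hypothetical LoRA file that shifts the model's defaults.
pipe.load_lora_weights("./loras/east_asian_portraits.safetensors")

# Say what you actually want rather than leaving it to the training-set mode.
image = pipe(
    "professional LinkedIn headshot of an East Asian woman, "
    "straight black hair, office background",
    num_inference_steps=30,
).images[0]
image.save("headshot.png")
```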

[–] cerevant@lemmy.world 25 points 1 year ago (2 children)

The bias isn’t in the software, it is in the data. The stock photos of professional women that were fed in were overwhelmingly of white women.

That doesn’t say anything about the AI itself, but rather about the community that produced those photos.

[–] FaceDeer@kbin.social 7 points 1 year ago

I recall a somewhat similar incident when I was showing an in-law of mine how Stable Diffusion worked a while back. She's of Indian descent, and she asked Stable Diffusion to generate a picture of an Indian woman. All of the women it generated wore bindis and other "traditional" Indian cultural garb, and she was initially kind of annoyed by that. But I explained that that's because most of the photos of women in the training set that were explicitly tagged as Indian were dressed that way, whereas the rest of the Indian women in the training set probably weren't explicitly tagged. They were just women.

It was kind of interesting trying to figure out which option was more biased. Realizing that there was an understandable reason behind that helped ease her annoyance.
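That mechanism is easy to demonstrate with a toy simulation (made-up numbers, not real training data): if only the stereotypical photos carry the explicit tag, then conditioning on the tag samples from the stereotyped subset.

```python
# Toy illustration of tag-conditioning bias: the explicit "indian" tag only
# appears on a skewed subset, so prompting with it samples that subset.
import random

dataset = (
    [{"tags": {"woman", "indian"}, "garb": "traditional"}] * 80
    + [{"tags": {"woman", "indian"}, "garb": "everyday"}] * 20
    + [{"tags": {"woman"}, "garb": "everyday"}] * 900  # untagged, incl. Indian women
)

def p_traditional(tag_query, n=10_000):
    pool = [d for d in dataset if tag_query <= d["tags"]]
    return sum(random.choice(pool)["garb"] == "traditional" for _ in range(n)) / n

print(p_traditional({"woman", "indian"}))  # ~0.80: the tagged subset is skewed
print(p_traditional({"woman"}))            # ~0.08: the full pool is not
```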

[–] luthis@lemmy.nz 3 points 1 year ago (1 children)

Yes, but they trained on large amounts of easily accessible data, which actually says that the stock photo websites are the biased ones here.

No model can be trained on an equal amount of diverse data for everyone, and it's not supposed to be anyway. I bet it was hardly, if at all, trained on Mongolian goat herders, but you could hardly say it's biased against them, just that there wasn't a large, easily accessible set of pictures of them.

[–] cerevant@lemmy.world 2 points 1 year ago

That’s my point. The AI isn’t an independent subject to be criticized, it is a cultural mirror.

[–] monsterlynn@kbin.social 6 points 1 year ago (1 children)

@stopthatgirl7 She also ended up with slightly frizzy hair compared to her relatively straight hair.

All around messed up and creepy.

[–] WoahWoah@lemmy.world 3 points 1 year ago

That's what I noticed. It made her hair arguably LESS "professional."

[–] EncryptKeeper@lemmy.world 4 points 1 year ago* (last edited 1 year ago)

I mean, wouldn’t this just be due to the sheer number of BS “female professional” stock photos used on the websites of call centers globally that the AI ingested? Those “professional white person” photos are used especially on non-Western websites to gain legitimacy in the West.

Given what little I know about how AI ingests and spits out data, it might be correlating the buzzword “professional” with the stock photos of white people ingested from Asian websites. It might be “wrong”, but the AI doesn’t attempt to be “right”; it just tries to give you what you expect based on the data it has.
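That intuition can be sketched with a few invented captions (not real training data): whatever words co-occur most often with "professional" in the captions are what a model learns to associate with the term.

```python
# Toy co-occurrence count over invented captions, showing how "professional"
# can end up statistically tied to one look in scraped caption data.
from collections import Counter

captions = [
    "professional headshot, white woman, office",
    "professional team photo, white man, suit",
    "professional portrait, white woman, blazer",
    "woman smiling outdoors",
    "asian woman, street photo, casual",
]

cooccur = Counter()
for cap in captions:
    words = {w.strip(",") for w in cap.split()}
    if "professional" in words:
        cooccur.update(words - {"professional"})

print(cooccur.most_common(3))  # "white" dominates wherever "professional" appears
```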

[–] tictac2@lemmy.world 2 points 1 year ago

Well we all know that Asians are far from professional

/s