this post was submitted on 01 Aug 2023
527 points (82.4% liked)
Technology
You could get a more controlled result if you use inpainting. I resized the image in an image editor first, because the model gave me really strange results when I fed it the odd resolution of the image from the article. I masked out her face and had the model go ham on the rest of the image while leaving the face and the hairline untouched.
After that I removed the mask, changed the denoising strength to 0.2, applied the "face fix", and this is the end result.
It's usable, but I think it's a weird use-case for these kinds of tools. Like yeah, you can retouch photos if you're careful enough, but at the same time, wouldn't it be easier to just dress up and take some new photos? I dunno, the idea of using an AI-generated profile image feels kind of sketchy to me, but then again we've had beautification filters available for ages - my work phone, a Google Pixel 6, comes with one built into the camera app. Every time the camera opens on my face I get this weird uncanny feeling.
Anyway. The article does touch upon a problem that definitely worries me too.
I really hope no company would use an image model to analyse candidates' profile photos, because as an ugly person that makes me want to scream. This kind of thing has happened before, though: Amazon developed a tool for recruiters that turned out to have a bias against women. I can easily see a "CV analysis tool" having a bias against people with names of non-European origin, for example.
At this point I think it's impossible to put the genie back into the bottle. Given the chance I definitely would, but all we can do now is try to mitigate the potential harm caused by these tools as much as possible.