this post was submitted on 01 Aug 2023
528 points (82.5% liked)

An Asian MIT student asked AI to turn an image of her into a professional headshot. It made her white with lighter skin and blue eyes. Rona Wang, a 24-year-old MIT student, was experimenting with the AI image creator Playground AI to create a professional LinkedIn photo.

[–] GenderNeutralBro@lemmy.sdf.org 68 points 1 year ago (2 children)

This is not surprising if you follow the tech, but I think the signal boost from articles like this is important: there are constantly new people just learning how AI works, and it's very important that they understand the biases embedded in these models.

It's also worth actually learning how to use these tools. People seem to expect them to be magic. They are not magic.

If you're going to try something like this, you should describe yourself as clearly as possible. Describe your eye color, hair color/length/style, age, expression, angle, and obviously race. Basically, describe any feature you want it to retain.
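To make that concrete, here is a minimal sketch of the idea: spell out every feature you want retained as explicit text, because (as discussed below) the generator only ever sees your prompt, not your face. The function and field names here are made up for illustration; they are not part of any real tool's API.

```python
def build_prompt(features: dict[str, str]) -> str:
    """Assemble a text-to-image prompt that explicitly lists every
    feature the user wants preserved. Anything left unstated is up
    to the model's (biased) defaults."""
    base = "professional LinkedIn headshot photo"
    details = ", ".join(f"{value} {name}" for name, value in features.items())
    return f"{base} of a person with {details}" if details else base

# Example: describe eye color, hair, and expression instead of hoping
# the model infers them from the input image.
prompt = build_prompt({
    "eyes": "dark brown",
    "hair": "long straight black",
    "expression": "confident smile",
})
```

The point of the sketch is just that the prompt, not the source photo, is where identity details have to live.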

I have not used the specific program mentioned in the article, but the ones I have used simply do not work the way she's trying to use them. The phrase she used, "the girl from the original photo", would have no meaning in Stable Diffusion, for example (which I'd bet Playground AI is based on, though they don't specify). The img2img function makes a new image, with the original as a starting point. It does NOT analyze the content of the original or attempt to retain any features not included in the prompt. There's no connection between the prompt and the input image, so "the girl from the original photo" is garbage input. Garbage in, garbage out.

There are special-purpose programs designed for exactly this task of making photos look professional, which presumably go to the trouble of analyzing the original, inferring these features, and passing them through to the generator so they're retained. (I haven't tried them personally, so perhaps I'm giving them too much credit...)

[–] CoderKat@lemm.ee 23 points 1 year ago

If it's Stable Diffusion img2img, then totally, this is a misunderstanding of how that works. img2img usually only takes low-level structure from the input, things like edges, composition, or depth. The text prompt the user provides is otherwise everything.

That said, these kinds of AI are absolutely still biased. If you tell the AI to generate a photo of a professor, it will likely generate an old white dude 90% of the time. The models are heavily biased by their training data, which often reflects society's biases (or really, the biases of whatever subset of society produced the training data).

Some AI systems actually try to counter bias a bit by injecting details into your prompt if you don't mention them. E.g., if you just say "photo of a professor", the system might silently rewrite it to "photo of a female professor" or "photo of a Black professor", which I think is a great way to tackle this bias. I'm not sure how widespread this approach is or how effective the prompt manipulation is.
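A toy sketch of what that kind of prompt-level mitigation might look like. This is entirely hypothetical: the descriptor list, keyword check, and insertion rule are invented for illustration and are not how any particular vendor actually does it.

```python
import random

# Illustrative lists only; a real system would need far more care here.
DESCRIPTORS = ["female", "male", "Black", "Asian", "Latina", "older", "young"]
KEYWORDS = {"man", "woman", "female", "male", "black", "asian",
            "white", "latina", "older", "young"}

def diversify(prompt: str, rng: random.Random) -> str:
    """If the prompt mentions no demographic details, inject a random
    descriptor before the final noun, e.g. 'photo of a professor' ->
    'photo of a <descriptor> professor'. Otherwise leave it alone."""
    if set(prompt.lower().split()) & KEYWORDS:
        return prompt  # user already specified demographics
    head, sep, noun = prompt.rpartition(" ")
    descriptor = rng.choice(DESCRIPTORS)
    return f"{head} {descriptor} {noun}" if sep else f"{descriptor} {prompt}"

print(diversify("photo of a professor", random.Random(0)))
```

Note that a prompt which already specifies demographics passes through unchanged, which is the part that makes this compatible with the "describe yourself explicitly" advice above.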

[–] Blackmist@feddit.uk 3 points 1 year ago

I've taken a look at the website for the one she used, and it looks like a cheap toy. It's free, which is the first clue that it's not going to be great.

Not a million miles from the old "photo improvement" things that just run a bunch of simple filters and make over-processed HDR crap.