this post was submitted on 01 Aug 2023
528 points (82.5% liked)
Technology
If it's Stable Diffusion img2img, then yes, this is a misunderstanding of how that works. The model typically only takes coarse cues from the input image, such as edges or depth; everything else comes from the text-based prompt the user provides.
That said, these kinds of AI are absolutely still biased. If you tell the AI to generate a photo of a professor, it will likely generate an old white dude 90% of the time. The models inherit the biases of their training data, which often reflects society's biases (or really, the biases of the subset of society that produced whatever training data the model used).
Some AI services actually try to counter this bias a bit by injecting details into your prompt if you don't mention them. E.g., if you just say "photo of a professor", the service might randomly change your prompt to "photo of a female professor" or "photo of a Black professor", which I think is a great way to tackle this bias. I'm not sure how widespread this approach is or how effective the prompt manipulation turns out to be.
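The injection idea could look something like this sketch. The keyword and attribute lists here are made up for illustration; real services keep their actual lists and rewriting rules private.

```python
import random

# Hypothetical lists for illustration only -- not any vendor's real data.
PERSON_KEYWORDS = ("professor", "doctor", "CEO")
ATTRIBUTES = ("female", "male", "Black", "Asian", "Hispanic", "elderly", "young")

def diversify_prompt(prompt: str, rng: random.Random) -> str:
    """If the prompt names a person-type with no demographic qualifier,
    randomly prepend one. A sketch of the technique, not a real system."""
    if any(attr.lower() in prompt.lower() for attr in ATTRIBUTES):
        return prompt  # user already specified; leave the prompt alone
    for keyword in PERSON_KEYWORDS:
        if keyword in prompt:
            attr = rng.choice(ATTRIBUTES)
            return prompt.replace(keyword, f"{attr} {keyword}", 1)
    return prompt
```

So "photo of a professor" might become "photo of an elderly professor", while "photo of a female professor" passes through unchanged because the user already made a choice.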