this post was submitted on 03 Aug 2023
225 points (94.8% liked)

Content moderators who worked on ChatGPT say they were traumatized by reviewing graphic content: 'It has destroyed me completely.' Moderators told The Guardian that the content they reviewed depicted graphic scenes of violence, child abuse, bestiality, murder, and sexual abuse.

[–] uriel238@lemmy.blahaj.zone 38 points 1 year ago* (last edited 1 year ago) (2 children)

This came up years ago when Facebook had its hearings about the difficulties of content moderation. Across hundreds of billions of pieces of content you're going to end up with millions of NSFL BLITs. Of those, even if only 0.1% require a human to determine whether or not they're unsafe, that's still a thousand moderators who've had their day ruined.
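A rough back-of-the-envelope sketch of the scale being described here; every figure is an assumption picked only to illustrate the "billions → millions → a thousand ruined days" chain, not real platform data:

```python
# Back-of-the-envelope scale check (all numbers are illustrative assumptions)
total_items = 300_000_000_000    # "hundreds of billions" of pieces of content
nsfl_rate = 1 / 100_000          # assumed fraction that is NSFL material
human_review_rate = 0.001        # 0.1% of that needs a human decision
items_per_moderator_day = 3      # assumed disturbing items one person reviews per day

nsfl_items = total_items * nsfl_rate            # ~3 million NSFL items
human_reviews = nsfl_items * human_review_rate  # ~3,000 human decisions
ruined_days = human_reviews / items_per_moderator_day  # ~1,000 moderator-days

print(f"NSFL items: {nsfl_items:,.0f}")
print(f"Needing human review: {human_reviews:,.0f}")
print(f"Moderator-days ruined: {ruined_days:,.0f}")
```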

So apparently the response so far has been to outsource the work to developing countries, ruining lives there instead.

[–] hglman@lemmy.ml 9 points 1 year ago

Like factory work before it.

[–] Pyr_Pressure@lemmy.ca 6 points 1 year ago (2 children)

Wonder if this would be something AI could eventually be trained to filter.

Hopefully it wouldn't make the AI hate humanity enough to evolve and destroy us though, thinking we are all perverts and sadists.

[–] uriel238@lemmy.blahaj.zone 2 points 1 year ago (1 children)

I doubt it. Our own sense of disgust is protective, to keep us from getting poisoned or infected with a contagious pathogen, or in the case of seeing violence, to keep us safe from that very same threat.

Even if we instil AI with survival objectives, it'll learn to avoid things that are dangerous to it, while still being able to operate, say, our sewage and waste disposal systems without having a visceral response to the material being processed.

That doesn't fully make us safe from AI deciding we're too degenerate to live. An interesting notion comes up in recent news of an Air Force general saying USAF AI is trained on Judeo-Christian values. And while that doesn't mean anything, I could see an AI-driven autonomous weapon (or a commander-AI that controlled and organized an immense army of murder-drones) being trained to assess humans based on their history of behavior, destroying sinners or degenerates or perverts or whatever.

Given we humans are a vengeful lot (Hammurabi's code's "an eye for an eye" was meant to set an upper limit on retribution: claim an eye rather than kill the offender's whole family over the incident), it would be very easy to set judge-bots to err on the side of assuming guilt, necessitating punitive action.

AIs going renegade can always be attributed to poor programming.

I reckon we should work out how to feed the content into a brain in a jar, and then measure the disgust - might need a few brains to ensure there is a consensus.

[–] inso@lemmy.sdf.org 2 points 1 year ago

There's actually a very good sci-fi story idea here.