this post was submitted on 02 Aug 2023
359 points (94.1% liked)

Tech experts are starting to doubt that ChatGPT and A.I. ‘hallucinations’ will ever go away: ‘This isn’t fixable’::Experts are starting to doubt it, and even OpenAI CEO Sam Altman is a bit stumped.

[–] Mirodir@lemmy.fmhy.net 3 points 1 year ago (1 children)

It doesn't have the ability to just look up anything from its training data; that stuff is encoded in its parameters. Still, the input has to be encoded in a way that causes the correct "chain reaction" of excited/not-excited neurons.
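
For a concrete picture, here's a minimal sketch in Python of that "chain reaction": a toy two-layer network with random, made-up weights (nothing from any real model). The point is that the output is computed from the parameters, not looked up anywhere:

```python
import numpy as np

# Toy two-layer network: all the "knowledge" lives in the weights W1/W2
# (the parameters), not in any lookup table. The weights here are random,
# purely for illustration.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))   # input -> hidden
W2 = rng.normal(size=(8, 3))   # hidden -> output

def forward(x):
    # Each hidden neuron ends up "excited" (positive) or not, depending on
    # the weighted sum of its inputs -- the chain reaction described above.
    hidden = np.maximum(0.0, x @ W1)  # ReLU: excited / not excited
    return hidden @ W2                # output scores

x = np.array([1.0, 0.0, -0.5, 2.0])  # some numerically encoded input
print(forward(x))
```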

Beyond that, it's not just a carbon copy of what was in the training data either, because you can tell it what variable names to use, which order to do things in, change some details, etc. If it were simply a lookup, that wouldn't be possible. The training made it able to generalize what it learned to some extent.

[–] tryptaminev@feddit.de 5 points 1 year ago

Yes, but it doesn't do so because it understands what a variable is; it does so because it has statistics about where variables most likely belong.
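
To make "statistics about where things belong" concrete, here's a toy sketch in Python: a simple bigram counter over made-up training text, vastly cruder than a real LLM, but illustrating prediction from pure co-occurrence counts with no notion of meaning:

```python
from collections import Counter, defaultdict

# Toy bigram model: count which token follows which in some "training" text.
# A real LLM generalizes far beyond raw counts, but the underlying question
# is similar: "which token is statistically likely here?", not "what does
# this token mean?"
training = "int x = 0 ; int x = 1 ; int y = 2 ;".split()

follows = defaultdict(Counter)
for prev, nxt in zip(training, training[1:]):
    follows[prev][nxt] += 1

def most_likely_next(token):
    # Pick the most frequent follower seen in training.
    return follows[token].most_common(1)[0][0]

print(most_likely_next("int"))  # -> 'x': it followed 'int' most often
```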

In a way, it's like the guy who won the French Scrabble championship without speaking a single word of French, by memorizing the words in the dictionary.