I'm rather curious to see how the EU's privacy laws are going to handle this.

(Original article is from Fortune, but Yahoo Finance doesn't have a paywall)

Veraticus@lib.lgbt -5 points 1 year ago

You can indeed tell if something is true or untrue. You might be wrong, but that is quite different -- you hold internal knowledge, and that knowledge can be right or wrong. The very word "misremembered" implies that you did (or at least could) know it properly.

LLMs do not retain facts, and they can and frequently do get information wrong.

Here's a good test: choose a video game or TV show you know really well -- something a little older and somewhat complicated -- and ask ChatGPT about specific plot points in it.

As an example, I know Final Fantasy 14 extremely well and have played it for a long time. ChatGPT will confidently state facts about the game that are entirely and totally incorrect: it confuses characters, it moves plot points around. That's because it generates whatever is statistically likely to come next, not what is actually correct. Indeed, it has no ability to know what is correct at all.
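To illustrate what "statistically likely" means, here's a toy sketch (made-up tokens and scores, nothing from a real model): the core loop scores candidate next words and samples from the resulting distribution. Truth never enters into it.

```python
import math
import random

# Toy sketch, not a real model: score candidate next tokens, turn the
# scores into probabilities, and sample. Nothing here checks a fact --
# "statistically likely" is the only criterion the loop has.
vocab_logits = {
    "Hydaelyn": 2.1,    # plausible in an FF14 answer
    "Zodiark": 1.8,     # also plausible
    "Sephiroth": 0.4,   # wrong game, but still has nonzero probability
}

def softmax(logits):
    exps = {tok: math.exp(s) for tok, s in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

probs = softmax(vocab_logits)
next_token = random.choices(list(probs), weights=list(probs.values()))[0]
print(probs, "->", next_token)
```

Sometimes the likeliest token is also the true one, which is why it often sounds right -- but that's a coincidence of the training data, not a correctness check.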

An LLM is not a simulation of the human brain. It uses the mathematical concept of a neural network, but it is a word model, nothing more.
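If "mathematical neural network" sounds mysterious, it isn't -- each unit is just a weighted sum pushed through a squashing function. A minimal sketch (arbitrary numbers):

```python
import math

# A "neuron" here is just arithmetic: a weighted sum of inputs pushed
# through a squashing function. All numbers below are arbitrary.
def neuron(inputs, weights, bias):
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-activation))  # sigmoid

print(neuron([0.5, -1.2, 0.3], [0.8, 0.1, -0.4], bias=0.2))
```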

fsmacolyte@lemmy.world 2 points 1 year ago (last edited 1 year ago)

The free version gets things wrong a lot. It's impressive how good GPT-4 is. Human brains are still a million times better in almost every way (they run on a few dollars' worth of energy per day, for example), but it's hard to believe how capable state-of-the-art LLMs are until you've tried one.

You're right about one thing, though: humans are able to know things, and to know when we don't know things. Current LLMs (transformer-based architectures) simply can't do that yet.
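To make that concrete: you can read a confidence signal out of a model -- how sharply peaked its next-token distribution is -- but that isn't the same thing as knowing. A toy sketch with invented numbers:

```python
import math

# Invented distributions over three candidate answers. Entropy measures
# how "sure" the model is -- but a sharply peaked distribution can still
# peak on a wrong answer, so low entropy is not knowledge.
def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

confident_but_wrong = [0.97, 0.02, 0.01]  # low entropy (~0.22 bits)
genuinely_unsure = [0.40, 0.35, 0.25]     # high entropy (~1.56 bits)

print(entropy(confident_but_wrong))
print(entropy(genuinely_unsure))
```

A model can be maximally "confident" and still wrong, which is exactly the failure mode the FF14 test above exposes.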