I mean, I'd argue they're highly complex I/O mechanisms, which is how you get weird hallucinations that developers can't easily explain.
But expecting cognition out of a graph is like demanding novelty out of a plinko machine. Not only do you get out what you put in, but you get a very statistically well-determined output. That's the whole point. The LLM isn't supposed to be doing high-level cognitive extrapolations. It's supposed to be doing statistical aggregates on word association using a natural language schema.
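To make the "statistical aggregates on word association" idea concrete, here's a toy bigram sketch in Python. It's nothing like a real transformer (no embeddings, no attention, just raw pair counts over a made-up corpus), but it shows the point above: the output is fully determined by the statistics fed in.

```python
# Toy sketch (not a real LLM): a bigram model that picks the
# statistically most likely next word from observed word pairs.
from collections import defaultdict, Counter

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word(word):
    """Return the most frequent follower. The output is fully
    determined by the counts: you get out what you put in."""
    followers = bigrams.get(word)
    return followers.most_common(1)[0][0] if followers else None

word = "the"
for _ in range(5):
    print(word, end=" ")
    word = next_word(word) or "the"
```

Run it and it will always emit the same chain, because "most likely next word" is just an aggregate over the training pairs. Real LLMs add sampling temperature and vastly more context, but the underlying move is the same kind of word-association statistics, not reasoning.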
Hallucinations imply a sense of "normal" or "reasonable" or at least "real" in the first place. LLMs have no concept of that.
I prefer to phrase it as "you get made-up results that are less convincingly made-up than the rest."