LLMs performed best on questions related to legal systems and social complexity, but they struggled significantly with topics such as discrimination and social mobility.

“The main takeaway from this study is that LLMs, while impressive, still lack the depth of understanding required for advanced history,” said del Rio-Chanona. “They’re great for basic facts, but when it comes to more nuanced, PhD-level historical inquiry, they’re not yet up to the task.”

Among the tested models, GPT-4 Turbo ranked highest with 46% accuracy, while Llama-3.1-8B scored the lowest at 33.6%.

[–] UnderpantsWeevil@lemmy.world 35 points 3 days ago (1 children)

> They’re really just basic input-output mechanisms.

I mean, I'd argue they're highly complex I/O mechanisms, which is how you get weird hallucinations that developers can't easily explain.

But expecting cognition out of a graph is like demanding novelty out of a Plinko machine. Not only do you get out what you put in, you get a very statistically well-determined output. That's the whole point. The LLM isn't supposed to be doing high-level cognitive extrapolation. It's supposed to be doing statistical aggregation over word associations using a natural-language schema.
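To make that concrete, here's a deliberately toy sketch (my own illustration, with made-up association counts, nothing like how a real transformer is implemented): the core move is just a weighted draw over which word tends to follow which.

```python
import random

# Hypothetical word-association counts, standing in for what a model
# distills from its training corpus.
next_word_counts = {
    "the": {"cat": 40, "dog": 35, "empire": 25},
    "empire": {"fell": 60, "expanded": 30, "was": 10},
}

def next_word(prev: str) -> str:
    """Pick the next word in proportion to how often it followed `prev`."""
    counts = next_word_counts[prev]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights, k=1)[0]

print(next_word("the"))      # "cat" about 40% of the time
print(next_word("empire"))   # "fell" about 60% of the time
```

The output is statistically well determined by the table you fed in; scale the table up enormously and you get fluent text, but nowhere in the loop is there a step that checks whether the draw is true.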

[–] lennivelkant@discuss.tchncs.de 12 points 3 days ago

Hallucinations imply a sense of "normal" or "reasonable" or at least "real" in the first place. LLMs have no concept of that.

I prefer to phrase it as "you get made-up results that are less convincingly made up than the rest."