this post was submitted on 23 Jan 2025
241 points (96.9% liked)

LLMs performed best on questions related to legal systems and social complexity, but they struggled significantly with topics such as discrimination and social mobility.

“The main takeaway from this study is that LLMs, while impressive, still lack the depth of understanding required for advanced history,” said del Rio-Chanona. “They’re great for basic facts, but when it comes to more nuanced, PhD-level historical inquiry, they’re not yet up to the task.”

Among the tested models, GPT-4 Turbo ranked highest with 46% accuracy, while Llama-3.1-8B scored the lowest at 33.6%.

[–] banadushi@sh.itjust.works -2 points 2 days ago (1 children)

Bullshit, they are prediction engines biased by their input. Saying that no one understands what they do is a gross simplification. Do you understand how encoders work? Great! Then there is a common vocabulary! Now we move on to a probability matrix given those inputs, and we get the output token, which cycles back through the input layer.
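A minimal sketch of the loop described above, with a toy vocabulary and random weights standing in for a trained model (all names and sizes here are made up for illustration): encode the input tokens, get a probability distribution over the vocabulary, sample an output token, and feed it back in as input.

```python
import numpy as np

# Toy stand-in for a trained transformer: random embeddings and a random
# output projection. Only the outer autoregressive loop is the point here.
rng = np.random.default_rng(0)

VOCAB_SIZE = 16   # shared vocabulary
EMBED_DIM = 8     # size of the toy hidden state

embeddings = rng.normal(size=(VOCAB_SIZE, EMBED_DIM))   # encoder lookup table
output_proj = rng.normal(size=(EMBED_DIM, VOCAB_SIZE))  # hidden state -> logits

def next_token_probs(token_ids):
    """Encode the input tokens, pool them, and return a probability
    distribution over the vocabulary."""
    hidden = embeddings[token_ids].mean(axis=0)   # crude pooling instead of attention
    logits = hidden @ output_proj
    exp = np.exp(logits - logits.max())           # softmax
    return exp / exp.sum()

tokens = [1, 4, 2]                                # some starting prompt tokens
for _ in range(5):
    probs = next_token_probs(np.array(tokens))
    new_token = int(rng.choice(VOCAB_SIZE, p=probs))  # sample the output token...
    tokens.append(new_token)                          # ...which circles back in as input

print(tokens)
```

In a real model the mean-pooling step would be attention layers and the weights would be learned, but the outer loop of predicting a distribution and feeding the sampled token back in is the same.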

[–] QuarterSwede@lemmy.world 1 points 2 days ago

> gross simplification

That was the point. Not everything needs to be literal.