this post was submitted on 23 Jan 2025
241 points (96.9% liked)

LLMs performed best on questions related to legal systems and social complexity, but they struggled significantly with topics such as discrimination and social mobility.

“The main takeaway from this study is that LLMs, while impressive, still lack the depth of understanding required for advanced history,” said del Rio-Chanona. “They’re great for basic facts, but when it comes to more nuanced, PhD-level historical inquiry, they’re not yet up to the task.”

Among the tested models, GPT-4 Turbo ranked highest with 46% accuracy, while Llama-3.1-8B scored the lowest at 33.6%.

[–] QuarterSwede@lemmy.world 90 points 3 days ago* (last edited 3 days ago) (6 children)

Ugh. No one in the mainstream understands WHAT LLMs are and do. They’re really just basic input-output mechanisms. They don’t understand anything. Garbage in, garbage out, as it were.

[–] drosophila@lemmy.blahaj.zone 16 points 3 days ago* (last edited 3 days ago)

Specifically, they are completely incapable of unifying information into a self-consistent model.

To use an analogy: you see a shadow and know it's being cast by some object with a definite shape, even if you can't be sure what that shape is. An LLM sees a shadow, and its idea of what's casting it is as fuzzy and mutable as the shadow itself.

Funnily enough, old-school AI from the 70s, like logic engines, possessed a superhuman ability for logical self-consistency. A human can hold contradictory beliefs without realizing it; a logic engine is incapable of self-contradiction once all of the facts in its database have been collated. (This is where the sci-fi idea of machines like HAL 9000 and Data from Star Trek comes from.)

However, this perfect reasoning ability left logic engines completely unable to deal with contradictory or ambiguous information, as well as logical paradoxes. They were also severely limited by the fact that practically everything they knew had to be explicitly programmed into them. So if you wanted one to be able to hold a conversation in plain English, you would have to enter all kinds of information that we know implicitly, like the fact that water makes things wet or that most, but not all, people have two legs. A basically impossible task.
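
To make the contrast concrete, here is a minimal sketch of the forward-chaining style those logic engines used (the fact base and the single rule are made-up toy examples, not from any real system): every derived fact follows deterministically from the database, which is why self-contradiction is impossible, and also why nothing it wasn't explicitly told can ever appear.

```python
# Toy forward-chaining logic engine in the spirit of 1970s expert systems.
# The facts and the rule here are hypothetical examples.
facts = {"socrates_is_human", "humans_are_mortal"}

# Each rule: if all premises are already facts, the conclusion becomes a fact.
rules = [
    ({"socrates_is_human", "humans_are_mortal"}, "socrates_is_mortal"),
]

changed = True
while changed:  # keep collating until nothing new can be derived
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)  # derived deterministically, never contradictory
            changed = True

print(facts)  # {'socrates_is_human', 'humans_are_mortal', 'socrates_is_mortal'}
```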

With the rise of machine learning and large artificial neural networks, we solved the problem of dealing with implicit, ambiguous, and paradoxical information, but in the process we completely removed the ability to reason logically.

[–] UnderpantsWeevil@lemmy.world 35 points 3 days ago (1 children)

They’re really just basic input-output mechanisms.

I mean, I'd argue they're highly complex I/O mechanisms, which is how you get weird hallucinations that developers can't easily explain.

But expecting cognition out of a graph is like demanding novelty out of a Plinko machine. Not only do you get out what you put in, but you get a very statistically well-determined output. That's the whole point. The LLM isn't supposed to be doing high-level cognitive extrapolations. It's supposed to be doing statistical aggregates on word association using a natural language schema.
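
A deliberately tiny sketch of that idea, assuming nothing fancier than counted word pairs (a real LLM is enormously bigger, but the output is still a weighted draw over learned associations):

```python
import random
from collections import Counter, defaultdict

# "Statistical aggregates on word association": count which word follows which,
# then sample the next word from those counts. The corpus is a toy example.
corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(word):
    words, weights = zip(*following[word].items())
    return random.choices(words, weights=weights)[0]  # statistically well-determined draw

print(next_word("the"))  # usually "cat", sometimes "mat" or "fish"
```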

[–] lennivelkant@discuss.tchncs.de 12 points 3 days ago

Hallucinations imply a sense of "normal" or "reasonable" or at least "real" in the first place. LLMs have no concept of that.

I prefer to phrase it as "you get made-up results that are less convincingly made-up than the rest."

[–] spankmonkey@lemmy.world 7 points 3 days ago (1 children)

That is accurate, but the people who design and distribute LLMs refer to the process as machine learning and use terms like "hallucinations," which is the primary cause of the confusion.

[–] SinningStromgald@lemmy.world 7 points 3 days ago (1 children)

I think the problem is the use of the term AI. Regular Joe Schmo hears/sees AI and thinks of Data from ST:TNG or Cylons from Battlestar Galactica, not glorified search-engine chatbots. But AI sounds cooler than LLM, so they use AI.

[–] Grimy@lemmy.world 3 points 3 days ago

The term is fine. Your examples are very selective. I doubt Joe Schmo thought the aimbots in CoD were truly intelligent when he referred to them as AI.

[–] banadushi@sh.itjust.works -2 points 2 days ago (1 children)

Bullshit, they are prediction engines biased by their input. Saying that no one understands what they do is a gross simplification. Do you understand how encoders work? Great! Then there is a common vocabulary! Now we move on to a probability matrix given those inputs, and we get the output token, which circles through the input layer.
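
A rough sketch of that loop, with `model` and `sample` as hypothetical stand-ins rather than any real library's API: a distribution is computed from the tokens so far, one token is drawn, and it circles back into the input.

```python
import random

def model(tokens):
    # Stand-in for the real network: returns a probability distribution
    # over a toy vocabulary, conditioned (in principle) on the input tokens.
    vocab = ["history", "is", "complicated", "<eos>"]
    return {w: 1.0 / len(vocab) for w in vocab}

def sample(dist):
    words, weights = zip(*dist.items())
    return random.choices(words, weights=weights)[0]

tokens = ["history"]
while tokens[-1] != "<eos>" and len(tokens) < 10:
    dist = model(tokens)         # probabilities given the input so far
    tokens.append(sample(dist))  # the output token circles through the input layer

print(" ".join(tokens))
```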

[–] QuarterSwede@lemmy.world 1 points 2 days ago

gross simplification

That was the point. Not everything needs to be literal.

[–] intensely_human@lemm.ee 1 points 3 days ago

How do you define “understand”?

[–] Epzillon@lemmy.world 1 points 3 days ago (1 children)

I just like the analogy of a dashboard with knobs. Input text on one side, output text on the other. "Training" AI is simply letting the knobs adjust themselves based on feedback on the output. AI never "learns"; it only produces output based on how the knobs are dialed in. It's not a magic box, it's just a lot of settings converting data to new data.
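
The knob analogy maps almost literally onto a toy training loop. This is a made-up one-knob example, nothing like a real LLM's training run, but the shape is the same: produce output, get feedback, nudge the setting, repeat.

```python
# One "knob" (a weight), adjusted only by feedback on the output.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # toy input/target pairs

knob = 0.0  # the dashboard setting
lr = 0.05   # how hard each piece of feedback turns the knob

for _ in range(200):  # "training" is just repeated feedback, nothing more
    for x, target in data:
        output = knob * x       # the box only converts input to output
        error = output - target
        knob -= lr * error * x  # gradient-descent-style nudge

print(round(knob, 3))  # settles near 2.0; the knob is dialed in, nothing was "understood"
```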

[–] intensely_human@lemm.ee 3 points 3 days ago

Do you think real “understanding” is a magic process? Why would LLMs have to be “magic” in order to understand things?