
LLMs performed best on questions related to legal systems and social complexity, but they struggled significantly with topics such as discrimination and social mobility.

“The main takeaway from this study is that LLMs, while impressive, still lack the depth of understanding required for advanced history,” said del Rio-Chanona. “They’re great for basic facts, but when it comes to more nuanced, PhD-level historical inquiry, they’re not yet up to the task.”

Among the tested models, GPT-4 Turbo ranked highest with 46% accuracy, while Llama-3.1-8B scored the lowest at 33.6%.

top 50 comments
[–] rdsm@discuss.tchncs.de 6 points 1 day ago (1 children)

“Among the tested models, GPT-4 Turbo ranked highest with 46% accuracy, while Llama-3.1-8B scored the lowest at 33.6%.“

Have they tested actual SOTA models?

[–] Hawk@lemmynsfw.com 1 points 1 day ago

I don't think it would have made much of a difference, because the state-of-the-art models still aren't a database.

Maybe more recent models could store more information in a smaller number of parameters, but it's probably going to come down to the size of the model.

The only exception is if there is indeed some pattern in modern history that the model is able to learn, but I really doubt that.

What this article really brings to light is that people tend to use these models for things they're not good at, because they're being marketed as something they're not.

[–] Naia@lemmy.blahaj.zone 2 points 1 day ago (1 children)

Most people don't understand history. Anything trained on that is gonna struggle too.

[–] ripripripriprip@lemmy.world 1 points 1 day ago

Humans and LLMs learn in fundamentally different ways, though.

The nuance bit is really interesting, since I feel that nuance arises from these fundamental differences.

[–] QuarterSwede@lemmy.world 90 points 3 days ago* (last edited 3 days ago) (6 children)

Ugh. No one in the mainstream understands WHAT LLMs are and do. They’re really just basic input output mechanisms. They don’t understand anything. Garbage in, garbage out as it were.

[–] drosophila@lemmy.blahaj.zone 16 points 2 days ago* (last edited 2 days ago)

Specifically, they are completely incapable of unifying information into a self-consistent model.

To use an analogy: you see a shadow and know it's being cast by some object with a definite shape, even if you can't be sure what that shape is. An LLM sees a shadow, and its idea of what's casting it is as fuzzy and mutable as the shadow itself.

Funnily enough, old school AI from the 70s, like logic engines, possessed a super-human ability for logical self-consistency. A human can hold contradictory beliefs without realizing it; a logic engine is incapable of self-contradiction once all of the facts in its database have been collated. (This is where the sci-fi idea of robots like HAL 9000 and Data from Star Trek comes from.)

However, this perfect reasoning ability left logic engines completely unable to deal with contradictory or ambiguous information, as well as logical paradoxes. They were also severely limited by the fact that practically everything they knew had to be explicitly programmed into them. So if you wanted one to be able to hold a conversation in plain English, you would have to enter all kinds of information that we know implicitly, like the fact that water makes things wet or that most, but not all, people have two legs. A basically impossible task.
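For anyone who hasn't seen one, here's a minimal forward-chaining sketch of that style of engine in Python; the facts and the single rule are made-up examples, not anything from an actual 70s system:

```python
# Minimal forward-chaining sketch of a 1970s-style logic engine.
# Every fact and rule is hypothetical and must be stated explicitly.

facts = {("wet", "water"), ("touches", "shirt", "water")}

# Each rule: if all premises are already facts, add the conclusion.
rules = [
    ({("wet", "water"), ("touches", "shirt", "water")}, ("wet", "shirt")),
]

def collate(facts, rules):
    """Apply rules until no new facts can be derived (a fixed point)."""
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(collate(facts, rules))  # derives ("wet", "shirt"), and nothing it wasn't told
```

The engine never guesses beyond its explicit facts, which is exactly why it stays perfectly consistent and exactly why feeding it the real world by hand is hopeless.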

With the rise of machine learning and large artificial neural networks we solved the problem of dealing with implicit, ambiguous, and paradoxical information but in the process completely removed the ability to logically reason.

[–] UnderpantsWeevil@lemmy.world 35 points 3 days ago (1 children)

They’re really just basic input output mechanisms.

I mean, I'd argue they're highly complex I/O mechanisms, which is how you get weird hallucinations that developers can't easily explain.

But expecting cognition out of a graph is like demanding novelty out of a plinko machine. Not only do you get out what you put in, but you get a very statistically well-determined output. That's the whole point. The LLM isn't supposed to be doing high-level cognitive extrapolations. It's supposed to be doing statistical aggregates on word association using a natural language schema.

[–] lennivelkant@discuss.tchncs.de 12 points 2 days ago

Hallucinations imply a sense of "normal" or "reasonable" or at least "real" in the first place. LLMs have no concept of that.

I prefer to phrase it as "you get made-up results that are less convincingly made-up than the rest."

[–] spankmonkey@lemmy.world 7 points 3 days ago (1 children)

That is accurate, but the people who design and distribute LLMs refer to the process as machine learning and use terms like hallucinations, which is the primary cause of the confusion.

[–] SinningStromgald@lemmy.world 7 points 3 days ago (1 children)

I think the problem is the use of the term AI. Regular Joe Schmo hears/sees AI and thinks Data from ST:TNG or Cylons from Battlestar Galactica, not glorified search engine chatbots. But AI sounds cooler than LLM, so they use AI.

[–] Grimy@lemmy.world 3 points 3 days ago

The term is fine. Your examples are very selective. I doubt Joe Schmo thought the aimbots in CoD were truly intelligent when he referred to them as AI.

[–] banadushi@sh.itjust.works -2 points 1 day ago (1 children)

Bullshit, they are prediction engines biased by their input. Saying that no one understands what they do is a gross simplification. Do you understand how encoders work? Great! Then there is a common vocabulary! Now we move on to a probability matrix given those inputs, and we get the output token, which circles back through the input layer.

[–] QuarterSwede@lemmy.world 1 points 1 day ago

gross simplification

That was the point. Not everything needs to be literal.

[–] intensely_human@lemm.ee 1 points 2 days ago

How do you define “understand”?

[–] Epzillon@lemmy.world 1 points 2 days ago (1 children)

I just like the analogy of a dashboard with knobs. Input text on one side, output text on the other. "Training" AI is simply letting the knobs adjust themselves based on feedback about the output. AI never "learns"; it only produces output based on how the knobs are dialed in. It's not a magic box, it's just a lot of settings converting data to new data.
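A toy sketch of that knob-adjusting loop, assuming a single "knob" and a simple feedback signal (nothing LLM-specific, just the same adjust-by-error idea in miniature):

```python
# One "knob" (weight) being nudged by feedback until the outputs fit the data.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, desired output) pairs
knob = 0.0            # the dial on the dashboard
learning_rate = 0.05

for _ in range(200):
    for x, target in data:
        output = knob * x                  # produce output with the current setting
        error = output - target            # feedback: how far off were we?
        knob -= learning_rate * error * x  # nudge the knob to shrink the error

print(round(knob, 3))  # settles near 2.0; the knob got adjusted, nothing was "understood"
```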

[–] intensely_human@lemm.ee 3 points 2 days ago

Do you think real “understanding” is a magic process? Why would LLMs have to be “magic” in order to understand things?

[–] aesthelete@lemmy.world 11 points 2 days ago* (last edited 2 days ago)

No wonder the broligarchs love it so much.

[–] Womble@lemmy.world 2 points 1 day ago (1 children)

It would be interesting to give these scores a bit of context: what would a random person off the street, a history undergrad, and a history professor score?

[–] Hawk@lemmynsfw.com 4 points 1 day ago

I think they all would have performed significantly better with a degree of context.

Trying to use a large language model like a database is simply a misapplication of the technology.

The real question is: if you gave a human an entire library of history, would they be able to identify relevant paragraphs based on a paragraph that only contains semantic information? Probably not. That is the way we need to be using these things.

Unfortunately, companies like OpenAI really want this to be the next Google, because there's so much money to be had by selling this as a product to businesses who don't care to roll their own, more efficient solutions.

[–] Jubei_K_08@lemmy.world 11 points 2 days ago

I mean, it's probably better that it doesn't understand it than understanding it and then trying to end us like Ultron 😅

[–] werefreeatlast@lemmy.world 9 points 2 days ago

Plus some of it is made up and/or adjusted to make the rich and the assholes sound like heroes.

[–] Pockybum522@lemmy.zip 13 points 2 days ago (1 children)
[–] Boomkop3@reddthat.com 3 points 2 days ago

Aren't you glad education is cheap or even free in some places?

[–] ininewcrow@lemmy.ca 28 points 3 days ago (1 children)

This isn't new and noteworthy... because we humans don't understand human history and fail miserably to understand or remember past failings in every generation.

[–] Etterra 7 points 2 days ago (1 children)

Of course they lack understanding. It's right there in the name: large language model. It's complete garbage at anything other than sounding human. It doesn't actually understand anything it's saying, know how to research information, verify sources, etc. It's nothing more than a robot parrot.

[–] blubfisch@discuss.tchncs.de 4 points 2 days ago

Came here to say this. The headline should read "LLMs fail to generate convincing output on certain topics."

[–] reksas@sopuli.xyz 6 points 2 days ago

AI can't understand anything; that is why it will lie wildly if it doesn't know something. It can't know that it doesn't know. It would likely require some kind of consciousness for it to actually understand something. Expecting anything more than what it currently does well is just stupid until it proves it can do more. Yet people keep being surprised, but that isn't anything new.

[–] AbouBenAdhem@lemmy.world 16 points 3 days ago

For over a decade, complexity scientist Peter Turchin and his collaborators have worked to compile an unparalleled database of human history – the Seshat Global History Databank. Recently, Turchin and computer scientist Maria del Rio-Chanona turned their attention to artificial intelligence (AI) chatbots, questioning whether these advanced models could aid historians and archaeologists in interpreting the past.

Peter Turchin and his collaborators don’t have a great record of understanding human history themselves—their basic shtick has been to try to validate an Enlightenment-era, linear view of human history with statistics from their less-than-rigorous database, with less-than-impressive results. I wouldn’t necessarily expect an AI to outperform them, but I wouldn’t trust their evaluation of it, either.

[–] aramis87@fedia.io 12 points 3 days ago (1 children)

I was trying to solve a Betweenle a couple weeks ago, had it down to the words immediately before and after the word-for-the-day, and couldn't think of it. I went to three different AI engines and asked them what word was between those two, alphabetically. All three engines repeatedly gave me "answers" that did not occur between the two words. Like, I'd ask "what 5-letter English words are alphabetically between amber and amble?" and they'd suggest aisle or armor. None of them understood "alphabetically".
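For reference, "alphabetically between" is nothing more than plain string comparison, which a couple of lines of Python can check (using ambit, the answer mentioned further down the thread, as the intended word):

```python
# "Alphabetically between" is just ordinary string ordering.
def between(word, low="amber", high="amble"):
    return low < word < high

for candidate in ["ambit", "aisle", "armor"]:
    print(candidate, between(candidate))
# ambit True   (the intended answer)
# aisle False  (sorts before "amber")
# armor False  (sorts after "amble")
```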

[–] anindefinitearticle@sh.itjust.works 10 points 3 days ago (1 children)

Try asking one to write a sentence that ends with the letter "r", or a poem that rhymes.

They know words as black boxen with weights attached for how likely they are to appear in certain contexts. Prediction happens by comparing the chain of these boxes leading up to the current cursor and using weights and statistics to fill in the next box.

They don't understand that those words are made of letters unless they have been programmed to break each word down into its component letters/syllables. None of them have been programmed to do this because that increases the already astronomical compute and training costs.

About a decade ago I played with an LLM whose Markov chain did predictions based on what letter came next instead of what word came next (pretty easy modification of the base code). It was surprisingly comparably good at putting sentences and grammar together when working at the letter-scale. It was also horribly less efficient to train (which is saying something in comparison to word-level prediction LLMs) because it needs to consider many more units (letters vs words) leading up to the current one to maintain the same coherence. If the Markov chain was looking at the past 10 words, a word-level prediction has 10 boxes to factor into its calculations and trainings. If those words have an average of 5 letters, then letter-level prediction needs to consider at least 50 boxes to maintain the same awareness of context within a sentence/paragraph. This is a five-fold increase in memory footprint, and an even greater increase in compute time (since most operations are at least of linear order and sometimes more).
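A minimal sketch of that kind of letter-level Markov predictor, assuming a toy corpus and a fixed context length (not the actual code from back then):

```python
import random
from collections import defaultdict

def train(text, order=4):
    """Map each run of `order` characters to the letters observed after it."""
    model = defaultdict(list)
    for i in range(len(text) - order):
        model[text[i:i + order]].append(text[i + order])
    return model

def generate(model, seed, length=80, order=4):
    out = seed
    for _ in range(length):
        choices = model.get(out[-order:])
        if not choices:
            break
        out += random.choice(choices)  # sample the next letter by observed frequency
    return out

corpus = "the quick brown fox jumps over the lazy dog " * 50  # toy corpus
print(generate(train(corpus), "the "))
```

Raising `order` is exactly the cost described above: each extra character of context multiplies the number of distinct contexts the model has to store and compare.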

Taking that efficiency hit would allow LLMs to understand sub-word concepts like alphabetization, rhyming, root words, etc. But the expense and energy requirements aren't worth this modest expansion of understanding.

Adding a Generative Pre-trained Transformer just adds some plasticity to those weights and statistics beyond the Markov chain example I use above.

[–] Leg@sh.itjust.works 3 points 2 days ago

I just tried ChatGPT with all of these failure points: a 5-letter word that fits alphabetically between amber and amble (ambit), a sentence that ends in the letter R (it ended with "door"), and a poem that rhymes (aabbccbb). These things appear to be getting ironed out quite quickly. I don't think it will be much longer before we'll have to make some serious concessions.

[–] Grimy@lemmy.world 8 points 3 days ago* (last edited 3 days ago)

LLMs demonstrated greater accuracy when addressing questions about ancient history, particularly between 8,000 BCE and 3,000 BCE, but struggled with more recent events, especially from 1,500 CE to the present.

I'm not entirely surprised by this. LLMs are trained on the whole internet, not just the good part. There are groups online that are very vocal about things like the Confederates being in the right, for example. It would make sense to assume this essentially poisons the datasets. Realistically, no one is contesting history before that time.

Not that it isn't a problem and doesn't need fixing, just that it makes "sense".

[–] Skates@feddit.nl -3 points 1 day ago (2 children)

Among the tested models, GPT-4 Turbo ranked highest with 46% accuracy, while Llama-3.1-8B scored the lowest at 33.6%.

“The main takeaway from this study is that LLMs, while impressive, still lack the depth of understanding required for advanced history,” said del Rio-Chanona. “They’re great for basic facts, but when it comes to more nuanced, PhD-level historical inquiry, they’re not yet up to the task.”

I'm sorry, you fucking what? How about you test the world's population in PhD level history and see if you get a 46%? Are you fucking kidding me? You're telling me this machine is half accurate on PhD history and you're tryna act like that doesn't just make your entire history department fucking useless? At most, you have 5 years until it's better at the job than actual humans trained for it, because it's already better than the public at large.

[–] ribboo@lemm.ee 4 points 1 day ago

50% would be decent if it had any idea of when it actually was correct or not. But 50% is not very good when the faulty half results in it going off on a long tangent spewing lies. Lies that look incredibly real and take immense knowledge or huge amounts of time to check.

If you're well versed enough in the subject to spot the lies, you likely won't get much help from AI. And if you aren't, well, you're going to be learning a lot of incorrect information. Or spending a ridiculous amount of time fact-checking.

It works a bit like that for software development at the moment. AI is incredibly quick at spewing out code. But the time saved by copying it is lost looking for errors that are extremely well hidden.

[–] CheeseNoodle@lemmy.world 2 points 1 day ago (1 children)

For it to be a totally fair test you'd be testing the world's population in an open-book exam, since the model likely has every history book they could find in its training data.

[–] Hawk@lemmynsfw.com 3 points 1 day ago

Well, that's simply not true. The LLM is simply trained on patterns. Human history doesn't really have clear rules like programming languages do, so it's not going to be able to internalise that very well. But the English language does have patterns, so if you used a semantic or hybrid search over a corpus of content and then used an LLM to synthesise well-structured summaries and responses, it would probably be fairly usable.
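A rough sketch of that semantic-search-then-synthesise pattern; the embedding function and the final synthesis step here are hypothetical placeholders (a real system would call an embedding model and an actual LLM), only the retrieval logic is meant literally:

```python
import numpy as np

def embed(text: str, dims: int = 256) -> np.ndarray:
    """Toy stand-in for an embedding model: a hashed bag-of-words vector."""
    v = np.zeros(dims)
    for word in text.lower().split():
        v[hash(word) % dims] += 1.0
    return v

def top_k(question: str, corpus: list[str], k: int = 3) -> list[str]:
    """Rank corpus passages by cosine similarity to the question."""
    q = embed(question)
    scored = []
    for passage in corpus:
        p = embed(passage)
        score = float(q @ p) / (np.linalg.norm(q) * np.linalg.norm(p) + 1e-9)
        scored.append((score, passage))
    scored.sort(key=lambda s: s[0], reverse=True)
    return [passage for _, passage in scored[:k]]

def answer(question: str, corpus: list[str]) -> str:
    passages = top_k(question, corpus)
    # Placeholder: a real system would hand these passages to an LLM and ask it
    # to synthesise a well-structured response from them, not from its training data.
    return "\n".join(passages)
```

The point being that the model writes from retrieved passages instead of from whatever it happened to memorise.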

The big challenge that we're facing with media today is that many authors do not have any understanding of statistics, programming or data science/ ML.

An LLM is not AI; it's simply an application of a neural network over a large data set that works really well. So well, in fact, that the runtime penalty is outweighed by its utility.

I would have killed for these a decade ago, and they're an absolute game changer with a lot of potential to do a lot of good. Unfortunately, the uninitiated among us have elected to treat them like a silver bullet because they think it's the next dot-com bubble.

[–] JeeBaiChow@lemmy.world 3 points 2 days ago (1 children)

Really wondering about the point of those PhDs the LLMs claim to have 'passed'.

[–] Boomkop3@reddthat.com 3 points 2 days ago* (last edited 2 days ago) (2 children)

Make sure to read a bit further; they usually get about 10,000 attempts. And unfortunately most tests are just about recalling stuff; understanding is not something a text predictor does. It can't actually think.

[–] JeeBaiChow@lemmy.world 1 points 1 day ago

This. Every 'AI solves X' article I've read basically boils down to the equivalent of monkeys on typewriters, only with statistical guidance. Millions of iterations to solve what is essentially an intuitive solution for a reasonably intelligent being. I passed my driving test on the first try, after maybe a couple of weeks in school. What current AI is doing is not intelligent in the least, and hardly efficient.

[–] intensely_human@lemm.ee 1 points 2 days ago (1 children)

A friend’s mother was a doctor. Long ago, back in the 90s, she was talking about how there was some medical certification test that non-English speakers were passing simply by noticing key words in the question and correct answer.

[–] Boomkop3@reddthat.com 1 points 2 days ago

About as useful as driver's licenses in some places

[–] A_A@lemmy.world 1 points 3 days ago (1 children)

Suppose A and B are at war, and based on all the insults they throw at each other, you train an LLM to explain what's going on. Well, it will be quite bad. Maybe this is some part of the explanation.

[–] FlyingSquid@lemmy.world 4 points 3 days ago (1 children)

But that's exactly the problem. Humans with degrees in history can figure out what is an insult and what is a statement of fact a hell of a lot better than an LLM.

[–] A_A@lemmy.world -2 points 3 days ago

It took maybe thousands or even millions of years for nature to create animals that understand who they are and what's going on around them. Give those machines a few more years; they are not all LLMs, and they are advancing quite rapidly.
Finally, I completely agree with you that, for the time being, they are very bad at playing historian.

[–] ArchmageAzor@lemmy.world -1 points 2 days ago (1 children)

Why is it that whenever one of these tests is performed on AIs, they only test LLMs? It's like forcing an English major to take a history test.

Do it again with an AI trained on historical data.

[–] jacksilver@lemmy.world 4 points 2 days ago

LLMs are general purpose models trained on text. The thought is they should be able to address anything that can be represented in a textual format.

While you could focus the model by only providing specific types of text, the general notion is they should be able to handle tasks ranging across different domains/disciplines.

[–] CircuitGuy@lemmy.world 0 points 2 days ago (1 children)

I read some of it, but I find it funny because the bar is so ridiculously high for a new technology: understanding human history.

[–] 0xD@infosec.pub 1 points 2 days ago

Well, it has read all we know about it.