this post was submitted on 28 Oct 2024
1529 points (98.8% liked)

[–] clutchtwopointzero@lemmy.world 16 points 1 week ago (1 children)

The Korean SAT is highly standardized in multiple-choice form, and there is an immense library of past exams that both test takers and examiners use. I would be more impressed if the LLMs could also show step-by-step problem working...

[–] Rogers@lemmy.ml -5 points 1 week ago (1 children)

Claude 3.5 and o1 might be able to do that; if not, they are close to being able to. Still better than 99.99% of earthly humans.

[–] Tamo240@programming.dev 8 points 1 week ago (1 children)

You seem to be in the camp of believing the hype. See this write-up of an Apple paper detailing how adding simple statements that should not affect the answer to a question severely disrupts many top models' abilities.
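The perturbation that paper describes can be sketched in a few lines: append a clause that is numerically irrelevant to a word problem and check that the correct answer is unchanged. The problem text below is illustrative, modeled loosely on the paper's kiwi example; the exact wording and numbers here are my own filler, not quoted from the paper.

```python
# Sketch of the perturbation: a distractor clause that should not
# change the arithmetic at all.
base = ("Oliver picks 44 kiwis on Friday and 58 kiwis on Saturday. "
        "On Sunday, he picks double the number he picked on Friday.")
distractor = (" Five of the kiwis picked on Sunday were a bit smaller "
              "than average.")
question = " How many kiwis does Oliver have?"

# The distractor is a no-op: the total is the same either way.
answer = 44 + 58 + 2 * 44  # 190

for prompt in (base + question, base + distractor + question):
    print(f"{prompt}\n  correct answer: {answer}\n")
```

A model that has actually modeled the arithmetic should return 190 for both prompts; the paper's finding is that many models let the irrelevant clause leak into their answer.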

In Bloom's taxonomy of the six stages of higher-level thinking, I would say they reach the second stage, 'understanding', only in a small number of contexts, but we give them so much credit because, as a society, our supposed intelligence tests for people have always been more like memory tests.

[–] clutchtwopointzero@lemmy.world 1 points 1 week ago* (last edited 1 week ago)

Exactly... People are conflating the ability to parrot an answer based on machine-level recall (which is frankly impressive) with the machine actually understanding something and being able to articulate how it arrived at a conclusion (which, in programming circles, would be similar to a form of "introspection"). LLMs are not there yet.