this post was submitted on 11 Dec 2023
524 points (87.2% liked)

[–] kromem@lemmy.world 19 points 11 months ago* (last edited 11 months ago)

It's so much worse for Musk than just regression to the mean of political perspectives in the training data.

GPT-4 level LLMs have very complex mechanisms for how they arrive at results, which allow them to do so well on various tests of critical thinking, reasoning, knowledge, etc.

Those tests are the key benchmark being used to measure relative LLM performance right now.

The problem isn't just that conservatism is less prominent in the training data. It's that it's correlated with stupid.

If you want an LLM that thinks humans and dinosaurs hung out together, that magic is real, that aliens built the pyramids, that it is wise to discriminate against other races or genders rather than focus on collaborative advancement, etc - then you can end up with an AI aligned to and trained on conservatism, but it sure as hell isn't going to be impressing anyone with its scores.

If instead you optimize its scores to actually impress people in tech with your model, then you are going to need to train it on higher education content, which is going to reflect more progressive ideals.

There's no path to a well performing LLM that echoes conservative talking points, because those talking points are more closely correlated with stupidity than intelligence.

Even take something like gender -- Musk's perspective reflects very binary thinking versus nuanced consideration. Is an LLM that leans on binary thinking going to be more or less performant at critical thinking tasks than one that is attuned to nuance and sees topics as a spectrum rather than black or white?

It's fucking hilarious. I've been laughing about this for nearly a year knowing this was the inevitable result.

I suspect he's going to create a model whose output his userbase likes, but watch as he doesn't release its scores on the standardized tests. And it will remain a novelty pandering to his panderers while the rest of the industry eclipses his offering with 'woke' products that are actually smart.