this post was submitted on 23 Jan 2025
241 points (96.9% liked)
I'm sorry, you fucking what? How about you test the world's population on PhD-level history and see if you get a 46%? Are you fucking kidding me? You're telling me this machine is half accurate on PhD-level history, and you're tryna act like that doesn't just make your entire history department fucking useless? At most, you have 5 years until it's better at the job than actual humans trained for it, because it's already better than the public at large.
50% would be decent if it had any idea of when it was actually correct. But 50% is not very good when the faulty half results in it going off on a long tangent spewing lies. Lies that look incredibly real and take immense knowledge, or huge amounts of time, to check.
If you're well versed enough in the subject to spot the lies, you likely won't get much help from AI. And if you aren't, well, you're going to be learning a lot of incorrect information. Or spending a ridiculous amount of time fact-checking.
It works a bit like that for software development at the moment. AI is incredibly fast at spewing out code. But the time saved by copying it is lost hunting for errors that are extremely well hidden.
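To illustrate the kind of thing I mean, here's a purely hypothetical example (mine, not from any real model output) of code that reads as correct at a glance but hides a subtle off-by-one:

```python
def moving_average_buggy(values, window):
    # Looks plausible, but the range stops `window` elements early
    # instead of `window - 1`, so the final window is silently dropped.
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window)]

def moving_average_fixed(values, window):
    # Correct version: include the last full window.
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]
```

A quick spot check looks fine on both versions, which is exactly the problem: you only notice the missing final window if you count the outputs or test the boundary.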
For it to be a totally fair test, you'd be testing the world's population in an open-book exam, since the model likely has every history book they could find in its training data.
Well, that's simply not true. The LLM is simply trained on patterns. Human history doesn't really have clear rules like programming languages do, so the model isn't going to internalise it very well. But the English language does have patterns, so if you used a semantic or hybrid search over a corpus of content and then used an LLM to synthesise well-structured summaries and responses, it would probably be fairly usable.
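As a rough sketch of what I mean by hybrid search: blend a lexical score with a similarity score, take the top-ranked documents, and hand those to the LLM as grounding context. The scoring functions below are toy stand-ins (simple term overlap instead of BM25, character-bigram cosine instead of real embeddings), not any actual library:

```python
import math
from collections import Counter

# Toy corpus; in a real system these would be chunks of the source texts.
CORPUS = {
    "doc1": "the treaty of westphalia ended the thirty years war in 1648",
    "doc2": "the printing press spread rapidly across europe",
    "doc3": "the thirty years war devastated central europe",
}

def keyword_score(query, doc):
    # Crude lexical overlap, standing in for BM25.
    q, d = set(query.split()), Counter(doc.split())
    return sum(d[t] for t in q)

def vector_score(query, doc):
    # Stand-in for embedding cosine similarity: character-bigram overlap.
    def bigrams(s):
        return Counter(s[i:i + 2] for i in range(len(s) - 1))
    qv, dv = bigrams(query), bigrams(doc)
    dot = sum(qv[b] * dv[b] for b in qv)
    norm = (math.sqrt(sum(v * v for v in qv.values()))
            * math.sqrt(sum(v * v for v in dv.values())))
    return dot / norm if norm else 0.0

def hybrid_search(query, alpha=0.5):
    # Blend both scores; the top hits would be passed to the LLM
    # as context for synthesising a grounded answer.
    scored = {
        doc_id: alpha * keyword_score(query, text)
                + (1 - alpha) * vector_score(query, text)
        for doc_id, text in CORPUS.items()
    }
    return sorted(scored, key=scored.get, reverse=True)
```

The point is that retrieval does the factual heavy lifting and the LLM only rephrases what was retrieved, which sidesteps a lot of the hallucination problem the comment above describes.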
The big challenge we're facing with media today is that many authors don't have any understanding of statistics, programming, or data science/ML.
An LLM is not AI. It's simply an application of a neural network over a large data set that works really well. So well, in fact, that the runtime penalty is outweighed by its utility.
I would have killed for these a decade ago, and they're an absolute game changer with a lot of potential to do a lot of good. Unfortunately, the uninitiated among us have elected to treat them like a silver bullet because they think it's the next dot-com bubble.