this post was submitted on 22 Feb 2024
238 points (93.1% liked)
Technology
you are viewing a single comment's thread
view the rest of the comments
Forgive me if I think any kind of nuclear reaction should not be handled by what we’re calling “AI.” It could hallucinate that it’s winning a game of chess by causing a nuclear blast.
Whatever you read that convinced you this is what an AI hallucination is needs a better editing pass
Error builds upon error. It’s cursed from the start. When you factor in poisoned data, it never had a chance.
It’s not here yet because we aren’t advanced enough to make it happen. Dress it up in whatever way the owner class can swallow. That’s the truth: dead on arrival.
It seems like you are taking criticisms of LLMs and applying them to something that is very different. What poisoned data do you imagine this model having in the future?
That is a criticism of LLMs because new generations are being trained on writing that could be the output of LLMs, which can degrade the model. What suggests to you that this fusion reactor will be using synthetic fusion reactor data to learn when to stop itself?
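The degradation loop described above (models retraining on their own synthetic output) can be illustrated with a toy sketch. This is my own numeric analogy, not anything from the thread or from the reactor-control model being discussed: repeatedly refit a Gaussian to samples drawn from the previous generation's fit, and the estimated spread drifts toward zero as each generation forgets more of the original distribution's tails.

```python
import random
import statistics

# Toy illustration of "model collapse": generation 0 is the "real" data
# distribution; every later generation is fit only to synthetic samples
# drawn from the generation before it.
random.seed(0)
mu, sigma = 0.0, 1.0  # generation 0: the real distribution
for generation in range(500):
    # Draw a small synthetic dataset from the current fit...
    samples = [random.gauss(mu, sigma) for _ in range(20)]
    # ...then refit using only that synthetic data.
    mu = statistics.fmean(samples)
    sigma = statistics.stdev(samples)

# The fitted spread collapses far below the original sigma of 1.0.
print(f"after 500 generations: sigma = {sigma:.3g}")
```

The point of the rebuttal stands either way: this collapse mechanism requires feeding a model its own outputs as training data, which is a property of how LLM training corpora are gathered from the web, not something inherent to a control model trained on plasma sensor measurements.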