this post was submitted on 22 Feb 2024
238 points (93.1% liked)
Technology
That isn’t how any of this works…
You can’t just assume every AI works exactly the same, especially since the term “AI” is so vague and generalized these days.
The hallucinations you’re talking about, for one, refer to LLMs losing track of the narrative when they’re required to hold too much “in memory.”
Poisoned data isn’t even something an AI of this sort would encounter unless intentional sabotage took place. It’s a private program training on private data; where would the opportunity for intentionally bad data even come from?
And errors don’t necessarily build on errors. These are models that predict 30 seconds into the future using known physics and estimated outcomes. They could literally check their predictions 30 seconds later if the need arose, but honestly, why would they? Just move on to the next calculation from virgin data and estimate the next outcome, and the next, and the next.
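To make the “errors don’t compound” point concrete, here’s a minimal sketch of that pattern: each step forecasts a short horizon ahead from a fresh measurement, and the previous forecast can optionally be checked against reality. Everything here is illustrative, not any actual fusion-control code; the constant-velocity `predict` is a stand-in for a real physics model.

```python
def predict(state, horizon=30):
    # Stand-in for a physics-based model: simple constant-velocity
    # extrapolation over the horizon (seconds).
    position, velocity = state
    return position + velocity * horizon

def control_loop(measurements, horizon=30):
    """Each step predicts from the latest real measurement ("virgin data"),
    and checks the previous forecast against what actually happened.
    Because every forecast restarts from measured state, one bad prediction
    never feeds into the next one."""
    errors = []
    previous_prediction = None
    for position, velocity in measurements:
        if previous_prediction is not None:
            # Cheap verification: compare last forecast to observed reality.
            errors.append(abs(position - previous_prediction))
        previous_prediction = predict((position, velocity), horizon)
    return errors
```

The key design point is that `predict` is always called on the measured state, never on its own previous output, so error accumulation has nowhere to happen.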
On top of all that… this isn’t even dangerous. It’s not like anyone is handing the detonator for a nuke to an AI and saying “push the button when you think it’s best.” The worst outcome is “no more power,” which is scary if you run on electricity, but merely frustrating if you’re a human attempting to achieve fusion.