this post was submitted on 03 Nov 2023
165 points (90.2% liked)
Technology
It did no such thing. It doesn't know what those things are. "LLM AI" is not a conscious thinking being and treating it like it is will end badly. Giving an LLM any responsibility to act on your behalf automatically is a crazy stupid idea at this point in time. There needs to be a lot more testing and learning about how to properly train models for more reliable outcomes.
It's almost impressive how quickly humans seem to accept something as "human" just because it can form coherent sentences.
I don’t know why people cannot figure this out. What we are calling “AI” is just a machine putting words together based on what it has “seen” in its training data. It has no intelligence and no intent. It just groups words together.
It’s like going into a library and asking the librarian to be a doctor. They can tell you what the books in the library say about the subject (and might even make up some things based on what they saw in a few episodes of House), but they cannot actually do the work of a doctor.
It's glorified autocomplete.
It's a stochastic parrot.
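The "glorified autocomplete" point can be made concrete with a toy sketch: a bigram model that, like an LLM at a vastly smaller scale, picks each next word purely from statistics over text it has seen, with no understanding or intent. The corpus and function names here are made up for illustration.

```python
import random
from collections import defaultdict

# Tiny "training data" for the sketch.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count which words have been seen following which.
followers = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev].append(nxt)

def generate(start, length=8, seed=0):
    """Emit up to `length` words by repeatedly sampling a follower
    observed in the corpus -- pure pattern continuation, no meaning."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length - 1):
        options = followers.get(words[-1])
        if not options:  # dead end: word was never seen with a follower
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the"))
```

Every word the sketch emits was seen in its "training data," and every adjacent pair occurred there too; the output can look fluent without the program knowing anything about cats, dogs, or mats.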
Forming coherent sentences puts it above large sections of the population. Eventually they’re going to have to dumb down the speech output, à la Dubya during his presidency. Add to that all the conditioning to trust authoritative sources, and this is going to turn into a real problem sooner rather than later. I think one of the first things that will really cause damage is replacing teachers with AI. If all those teachers out there would quit asking to make more money than a 12-year-old in a meat-packing plant, maybe this wouldn’t happen, but I digress. (Kudos to all the teachers out there, obviously.)