this post was submitted on 21 Nov 2023
989 points (97.9% liked)
Technology
I mean, in its current form, yeah, but obviously it's going to get really good in the near future.
What kind of timeframe are you thinking: several months, or a few years?
Within 10 years for sure
AI seems to have gone through periods of relative stagnation punctuated by leaps forward. Neural networks were the next big thing when I was in college in the late 80s. Then fuzzy logic. Computer vision was limited maybe 30 years ago but has had some surges thanks to new algorithms and faster processors. Bayesian algorithms (Hidden Markov Models, etc.) got big fighting spam but also helped a lot with speech-to-text (STT). LLMs are the next big leap forward from that area of research. I think we still have a number of major leaps to go before we have an AGI, though. But if LLMs follow the same progression as text-to-speech (TTS) or STT, in 10-20 years they will be impressively good.
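For what it's worth, the "Bayesian algorithms fighting spam" era mostly meant naive Bayes classifiers over word counts. Here's a minimal sketch of the idea; the tiny training corpus and the `train`/`classify` function names are made up for illustration, not from any real spam filter:

```python
from collections import Counter
import math

def train(messages):
    """messages: list of (text, label) pairs, label 'spam' or 'ham'."""
    counts = {"spam": Counter(), "ham": Counter()}
    totals = Counter()
    for text, label in messages:
        counts[label].update(text.lower().split())
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    words = text.lower().split()
    vocab = set(counts["spam"]) | set(counts["ham"])
    scores = {}
    for label in ("spam", "ham"):
        # log prior: fraction of training messages with this label
        score = math.log(totals[label] / sum(totals.values()))
        n = sum(counts[label].values())
        for w in words:
            # Laplace smoothing so unseen words don't zero out the score
            score += math.log((counts[label][w] + 1) / (n + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

# Toy corpus, purely illustrative
corpus = [
    ("win free money now", "spam"),
    ("free prize claim now", "spam"),
    ("meeting at noon tomorrow", "ham"),
    ("lunch tomorrow with the team", "ham"),
]
counts, totals = train(corpus)
print(classify("claim your free money", counts, totals))  # spam
```

The same machinery (priors plus per-token likelihoods) is what HMM-based STT built on, just with sequences instead of bags of words.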
The tech is here; the problem is risk management. We've had the ability to build self-driving cars for almost a decade. Like Catholic priests and pedophilia, they're statistically much less likely to crash than the average human driver, but the assumption is that they never crash, so every time one does, everyone makes a big deal about it.
Think of all the B.S. documentation reports people have to write that no one reads. LLMs could easily handle those, but do you want to take the risk if those reports ever do become important?
Heh. People are already getting burned for using it blindly; do you think companies are any different?