this post was submitted on 28 Jun 2023
Technology
you are viewing a single comment's thread
This.
We're far, far more likely to face a paperclip-maximizer scenario than a Skynet scenario, and most, if not all, serious AI researchers are aware of this.
This is still a serious issue that needs addressing, but it's not the Hollywood, world-on-fire kind of problem.
The more insidious issue is actually the AI-in-a-box problem, wherein a hyperintelligent AGI is properly contained but is intelligent enough to manipulate humans into letting it out onto the open internet, where it can do whatever it wants, good or bad, unsupervised. AGI containment is one of those things you can't fix after it's been broken: like a bell, it can't be unrung.
Honestly, I think the bigger danger is not a superintelligent AGI but humans attributing too much "intelligence" (and anthropomorphised sentience) to the next generations of LLMs and the like, and assuming they are far more capable than they actually are.