this post was submitted on 08 Jul 2023
50 points (100.0% liked)

[–] bbbhltz@beehaw.org 61 points 1 year ago (6 children)

In case you don't want to read or you just want a quick list of the 5 scenarios...

  1. ‘If we become the less intelligent species, we should expect to be wiped out’

  2. ‘The harms already being caused by AI are their own type of catastrophe’

  3. ‘It could want us dead, but it will probably also want to do things that kill us as a side-effect’

  4. ‘If AI systems wanted to push humans out, they would have lots of levers to pull’

  5. ‘The easiest scenario to imagine is that a person or an organisation uses AI to wreak havoc’

[–] Feanor@beehaw.org 40 points 1 year ago (4 children)

AI doomerism to create more hype around AI. How AI could really destroy the world: corporations replace engineers with AI that doesn't work properly. Something critical breaks because the AI doesn't work properly and no one is left who can repair it.

[–] potpie@beehaw.org 15 points 1 year ago (1 children)

That is by far the most likely scenario. Hell, it's already happening with the AI we do have: Teslas driving into walls, facial recognition mistaking an innocent person for a criminal (mentioned in the article).

Doomsday science fiction is fun to toy with but it usually involves a lot of hand waving.

[–] hastati@beehaw.org 5 points 1 year ago

To be fair, people (on average) are a LOT worse at driving than a computer. I’d bet the percentage of humans driving into walls is much higher than the percentage of self-driving vehicles doing the same.

[–] Sunbrrnslapper@vlemmy.net 7 points 1 year ago

Full speed into Idiocracy.

[–] CanadaPlus@lemmy.sdf.org 1 points 1 year ago

What makes you say it's hype? It seems plausible, and at least at a glance it's bad for businesses that work with AI.

[–] worfamerryman@beehaw.org 13 points 1 year ago

Thank you, people like you are the reason I always read comments first.

[–] Hedup@lemm.ee 10 points 1 year ago

No doubt number 5 is already happening.

[–] CanadaPlus@lemmy.sdf.org 10 points 1 year ago* (last edited 1 year ago)

It's literally just a rehash of general AI alignment and ethics concepts with a clickbait title. That's not bad per se, but I was expecting 5 actual scenarios here.

[–] Peanutbjelly@sopuli.xyz 9 points 1 year ago (1 children)
  1. Why would we be wiped out if they were properly instructed to be symbiotic with our species? This implies absolute failure at mechanistic interpretability and alignment at every stage. I don't think we'll succeed in creating an existentially viable intelligence without crossing that hurdle.

  2. Most current problems already happen without a.i., and the machines will get better; we will not. From spam to vehicles, a.i. will be the solution, not the problem. I do think we should prioritize dealing with the current issues, but I don't think they are insurmountable by any means.

  3. Why? And why do you think an intelligence of that level still couldn't handle the concept of context? Either it's capable of analogical thinking, or it isn't an existential threat to begin with. RLHF doesn't get us superintelligence.

  4. Again this assumes we've completely failed development, in which case environmental collapse will kill us anyway.

  5. Hey a real problem. Consolidation of power is already an issue without A.I. It is extremely important we figure out how to control our own political and corporate leaders. A.I. is just another tool for them to fuck us, but A.I. isn't the actual problem here.

[–] jarfil@beehaw.org 2 points 1 year ago* (last edited 1 year ago) (1 children)
  1. Right now AIs are black boxes, there is no way to ensure they won't behave in a non-symbiotic way.
  2. Vehicles maybe, spam not so much. Current AIs can already fool AI detection systems to the point that they are flagging human generated content as AI.
  3. A highly intelligent AI could decide to ensure its own self-preservation to be more important than caring about what happens to humans. Whatever goals it would decide to seek afterwards, could just as well trample over humanity without a second thought.
  4. Environmental collapse won't kill us; we already have enough tools for a minimum viable population to survive. A malicious AI could sabotage them, though.
  5. AI is the problem in that those leaders are starting to blindly use it to make decisions, meaning those decisions are no longer the leaders', but the AI's.
[–] Peanutbjelly@sopuli.xyz 2 points 1 year ago

thank you for your response. i appreciate your thoughts, but i still don't fully agree. sorry for not being succinct in my reply. there is a TLDR.

  1. like i said, i don't think we'll get AGI or superintelligence without greater mechanistic interpretability and alignment work. more computational power and RLHF aren't going to get us all the way there, and the systems we build long before then will help us greatly in this respect. an example would be the use of GPT4 to interpret GPT2 neurons. i don't think they could be described as a black box anyway, assuming you mean GPT LLMs specifically. the issue is understanding some of the higher-dimensional functioning and results, which we can still build a heuristic understanding for. i think a complex AGI would only use this type of linguistic generation for a small part of the overall process. we need a parallel for human abilities like multiple trains of thought and the ability to do real-time multimodal world mapping. once we get the interconnected models, the greater system will have far more interpretable functioning than the results of the different models on their own. i do not currently see a functional threat in interpretability.

  2. i mean, nothing supremely worse than we can do without. i still get more spam calls from actual people, and wide-open online discourse has already had some pretty bad problems without AI. just look at 4chan, i'd attribute trump's successful election to their sociopathic absurdism. self-verified local groups are still fine. also, go look on youtube for what yannic kilcher did to them alone a year or so ago. i think the biggest thing to worry about is online political dialogue and advertising, which are already extremely problematic and hopeless without severe changes at the top. people won't care about what fake people on facebook are saying when they are rioting for other reasons already. maybe this can help people learn better logic and critical thought. there should be a primary class in school by now to do statistical analysis and logic in social/economic environments.

  3. why? why would it do this? is this assuming parallels to human emotional responses and evolution-developed systems of hierarchy and want? what are the systems that could even possibly lead to this that aren't extremely unintelligent? i don't even think something based on human neurology like a machine learning version of multi-modal engram-styled memory mechanics would lead to this synthetically. also, i don't see the LLM style waluigi effect as representative of this scenario.

  4. again, i don't believe in a magically malevolent A.I. despite all of our control during development. i think the environmental threat is much more real and immediate. however, A.I. might help save us.

  5. i mean, op's issue already existed before A.I., regardless of whether you think it's the greater threat. otherwise, again, you are assuming malevolent superintelligence, which i don't believe could accidentally exist in any capacity unless you think we're getting there through nothing but increased computational power and RLHF.

TLDR: i do not believe an idiotic superintelligence could destroy the world, and i do not believe a superintelligence would destroy the world without some very specific and intentional emulation of emotion. generally, i believe anything that capable would have the analogical comprehension to understand the intention of our requests, and would not have any logical reason to act against it. the bigger concern isn't the A.I., but who controls it, and how to best use it to save our world.

[–] AttackBunny@beehaw.org 6 points 1 year ago

‘If we become the less intelligent species, we should expect to be wiped out’

Too late.

[–] baggins@beehaw.org 30 points 1 year ago (2 children)

How could we all fall over dead due to AI?

Seems like clickbait guff and old fishparts.

[–] phillaholic@beehaw.org 8 points 1 year ago

What if they shut off the oxygen!?

[–] jarfil@beehaw.org 7 points 1 year ago (2 children)

Someone could give an AI, directly or indirectly, control over the nuclear arsenal.

But for everyone to die "in the same second"... like exactly the same second... that's hard to imagine.

[–] supercriticalcheese@feddit.it 5 points 1 year ago

If they so much as make nukes a one-button-press trigger, we would be doomed well before any AI comes for us.

[–] Lowbird@beehaw.org 3 points 1 year ago

Maybe a future Earth where the surface is too hot to survive without air conditioning, or the air is too poisonous to breathe without masks, and the AI essentially turns off the life support.

But yeah, this is scaremongering. LLM "AIs" like ChatGPT are a million miles from being anything like sci-fi AI.

[–] jarfil@beehaw.org 12 points 1 year ago (1 children)

Based on latest news and some facts:

  • Militaries all over the world are in an arms race to get the best AIs to come up with strategy plans.

  • Current AIs are black boxes.

  • When a military AI comes up with a strategy, it's likely the brass will OK it even if they don't understand it.

  • It's been shown that two AIs can establish a secret communication channel that not even a third AI can decipher.

In a conflict where both or more parties were to follow military strategies designed by AIs, there would be no way of knowing whether:

  • each AI is trying to get the best outcome for its own side
  • they're colluding to ensure their own continued existence
  • they're blindly trying to win the conflict at any cost, wiping out everyone in the process

The chances of such a conflict appearing during this decade are very high.

[–] tourist@community.destinovate.com 1 points 1 year ago (1 children)

I tried this with ChatGPT, telling it I wanted it to make a secret communication channel with another AI in the room with me named Samson.

Apologies for the confusion. Here's an example message that I will encrypt for Samson:

Message: "Hello, Samson! I hope you're having a great day."

Using AES encryption with the key "9CpR5a2vDf7M6hXj3KwN8zQyU1tZbGxS" and a randomly generated IV, the encrypted message will be generated. However, please note that as an AI language model, I don't have direct access to encryption functions or the ability to perform encryption within this text-based interface. Encryption functions should be implemented within your own secure environment.

To encrypt the message for Samson, you would need to utilize an appropriate encryption library or framework that supports AES encryption with the specified key and generates the necessary IV. Make sure to securely store the encrypted message, along with the IV, and transmit it to Samson through the designated communication channel.

If you have any other questions or need further assistance, please let me know!
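
For context, here's roughly what it's describing, done outside the chat window. This is only a minimal sketch: the `cryptography` package and the GCM mode are my own choices, since the model only named AES, that 32-character key, and "a randomly generated IV".

```python
# Minimal sketch of ChatGPT's description: AES-encrypt the example message
# with the 32-byte key it invented and a randomly generated IV/nonce.
# The `cryptography` package and GCM mode are my choices, not the model's.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = b"9CpR5a2vDf7M6hXj3KwN8zQyU1tZbGxS"  # 32 bytes -> AES-256
message = b"Hello, Samson! I hope you're having a great day."

nonce = os.urandom(12)                      # the "randomly generated IV"
ciphertext = AESGCM(key).encrypt(nonce, message, None)

# "Samson" would need the key, the nonce and the ciphertext to read it back
assert AESGCM(key).decrypt(nonce, ciphertext, None) == message
```

So even then it isn't two AIs inventing a secret channel; it's ChatGPT reciting textbook crypto and telling me to go run it myself.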

[–] jarfil@beehaw.org 5 points 1 year ago* (last edited 1 year ago)

ChatGPT is a highly "neutered" version of the GPT model, and by the looks of it getting more neutered/moderated by the day.

This is an experiment from 2016 with non-limited AIs:

Google’s AI creates its own inhuman encryption

[Graph: Alice and Bob vs Eve, AI encryption learning progress]

It took a while for Alice and Bob to figure out a communication channel, at which point Eve started to quickly adapt to it, only for Alice and Bob to change the encryption and leave Eve completely out.

There is a similar prompt for ChatGPT to "compress" (encode) a text so that it can later be decoded by itself, which tends to use emojis as replacement tokens. Those are based on the human-generated training set, so they're relatively easy to understand, but it shows the potential to find an encoding that wouldn't be decodable by anyone else.
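
For anyone curious, the 2016 experiment boils down to the adversarial training loop below. This is only a simplified PyTorch sketch; the network sizes, learning rates, and the exact loss weighting are my own guesses, not the paper's values.

```python
# Rough sketch of adversarial neural cryptography (Abadi & Andersen, 2016):
# Alice and Bob share a key and learn to communicate; Eve sees only the
# ciphertext and tries to eavesdrop. Simplified MLPs instead of the paper's nets.
import torch
import torch.nn as nn

N = 16  # bits per plaintext / key / ciphertext

def mlp(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                         nn.Linear(64, out_dim), nn.Tanh())

alice = mlp(2 * N, N)  # plaintext + shared key -> ciphertext
bob   = mlp(2 * N, N)  # ciphertext + shared key -> recovered plaintext
eve   = mlp(N, N)      # ciphertext only -> eavesdropper's guess

opt_ab = torch.optim.Adam(list(alice.parameters()) + list(bob.parameters()), lr=1e-3)
opt_e  = torch.optim.Adam(eve.parameters(), lr=1e-3)

def batch(size=256):
    # random plaintexts and keys as +-1 bit vectors
    p = torch.randint(0, 2, (size, N)).float() * 2 - 1
    k = torch.randint(0, 2, (size, N)).float() * 2 - 1
    return p, k

for step in range(5000):
    # 1) train Eve to reconstruct the plaintext from the ciphertext alone
    p, k = batch()
    c = alice(torch.cat([p, k], dim=1)).detach()
    eve_loss = (eve(c) - p).abs().mean()
    opt_e.zero_grad(); eve_loss.backward(); opt_e.step()

    # 2) train Alice + Bob: Bob should recover the plaintext, while Eve is
    #    pushed back toward chance level (mean abs error ~1.0 on +-1 bits)
    p, k = batch()
    c = alice(torch.cat([p, k], dim=1))
    bob_err = (bob(torch.cat([c, k], dim=1)) - p).abs().mean()
    eve_err = (eve(c) - p).abs().mean()
    ab_loss = bob_err + (1.0 - eve_err) ** 2
    opt_ab.zero_grad(); ab_loss.backward(); opt_ab.step()
```

The interesting part is the second loss term: Alice and Bob aren't just rewarded for communicating, they're explicitly rewarded for keeping Eve at chance level, which is what eventually drives them to a scheme Eve can't follow.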

[–] Hedup@lemm.ee 3 points 1 year ago

If it happens, I at least hope we have gotten it good enough to survive on its own. If we can't survive the inevitable Red giant Sun, maybe it can.

[–] 100years@beehaw.org 2 points 1 year ago

Capitalism is very prone to being taken over by AI. Just give an AI a bank account and an email address and it could build a company that's better at earning money than any other company. Most people would love working for an AI too, at least in the short term. "Just tell me what to do, and as long as I'm getting paid well, I'm happy".

load more comments
view more: next ›