titotal

joined 1 year ago
[–] titotal@awful.systems 7 points 8 months ago (1 children)

I feel this makes it an unlikely Great Filter, though. Surely some aliens would be less stupid than humanity?

Or they could be on a planet with far smaller fossil fuel reserves, so they don't have the opportunity to kill themselves.

[–] titotal@awful.systems 8 points 8 months ago

Yeah, the Fermi paradox really doesn't work here: an AI that was motivated and smart enough to wipe out humanity would be unlikely to just immediately off itself. Most of the doomerism relies on "tile the universe" scenarios, which would be extremely noticeable.

[–] titotal@awful.systems 10 points 9 months ago (5 children)

Apparently there's a new coding AI that is supposedly pretty good. Zvi does the writeup and logically extrapolates what will happen with future versions, which will obviously self-improve and... solve cold fusion?

James: You can just 'feel' the future. Imagine once this starts being applied to advanced research. If we get a GPT5 or GPT6 with a 130-150 IQ equivalent, combined with an agent. You're literally going to ask it to 'solve cold fusion' and walk away for 6 months.

...

Um. I. Uh. I do not think you have thought about the implications of ‘solve cold fusion’ being a thing that one can do at a computer terminal?

Yep. The recursively self-improving AI will solve cold fucking fusion from a computer terminal.

[–] titotal@awful.systems 5 points 9 months ago (1 children)

Good to see the Yud tradition of ridiculous strawmanning of science continue.

In this case, the strawscientist falls for a Ponzi scheme because "it always outputted the same returns". So scientific!