Yeah, the Fermi paradox really doesn't work here: an AI that was motivated and smart enough to wipe out humanity would be unlikely to just immediately off itself. Most of the doomerism relies on "tile the universe" scenarios, which would be extremely noticeable.
titotal
Apparently there's a new coding AI that is supposedly pretty good. Zvi does the writeup and logically extrapolates what will happen with future versions, which will obviously self-improve and... solve cold fusion?
James: You can just 'feel' the future. Imagine once this starts being applied to advanced research. If we get a GPT5 or GPT6 with a 130-150 IQ equivalent, combined with an agent. You're literally going to ask it to 'solve cold fusion' and walk away for 6 months.
...
Um. I. Uh. I do not think you have thought about the implications of ‘solve cold fusion’ being a thing that one can do at a computer terminal?
Yep. The recursively self-improving AI will solve cold fucking fusion from a computer terminal.
Good to see the Yud tradition of ridiculous strawmanning of science continue.
In this case, the strawscientist falls for a Ponzi scheme because "it always outputted the same returns". So scientific!
I feel this makes it an unlikely Great Filter, though. Surely some aliens would be less stupid than humanity?
Or they could be on a planet with far smaller fossil fuel reserves, so they don't have the opportunity to kill themselves.