Hilarious.
Only five years ago, no one in the computer science industry would have bet that AI could explain why a joke was funny or perform creative tasks.
Today that's become so normalized that people are calling something once thought literally impossible a speculative bubble, because progress that surprised everyone in the industry, and then surprised them again with the next model a year later, hasn't moved fast enough?
The industry is still learning how to even use the tech.
This is like TV being invented in 1927 and people in 1930 declaring it a bubble because it hadn't grown as fast as they expected.
Did OP consider the work going on in optoelectronic neural networks at literally every single tech college's VC group, and how that could decouple AI training and operation from Moore's Law? I'm guessing no.
Near-perfect analysis, eh? By someone who read and regurgitated a piece by a journalist who writes for a living, and who may have an inherent bias when evaluating the future prospects of a technology positioned to replace writers?
We haven't even had a public release of multimodal models yet.
This is about as "near-perfect" an analysis as smearing yourself with paint and rolling down a canvas laid on a hill.