It's kind of funny, because right now, GPT4 doesn't even achieve GPT4 levels of performance.
It's a real issue with not having access to the underlying models / how they were trained. We know they've repeatedly nerfed/broken this model.
I feel like a broken record, but...
Seriously, the current large "language" models - or should I say, large syntax models? - are a technological dead end. They might find a lot of applications, but they certainly will not evolve into the "superhuman capabilities" of the tech bros' wet dreams.
At best, all that self-instruction will do is play whack-a-mole with hallucinations. At worst, it'll degenerate.
You'll need a different architecture to go meaningfully past that. Probably one that doesn't handle semantics as an afterthought, but as its own layer - a big, central one.