this post was submitted on 05 Aug 2023

TechTakes

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

The problem is that today's state of the art is far too good for low-hanging fruit. There isn't a testable definition of GI that GPT-4 fails that a significant chunk of humans wouldn't also fail, so you're often left with weird ad hominems ("Forget what it can do and the results you see. It's "just" predicting the next token, so it means nothing") or imaginary distinctions built on vague and ill-defined assertions ("It sure looks like reasoning, but I swear it isn't real reasoning. What does "real reasoning" even mean? Well idk, but just trust me bro")

a bunch of posts on the orange site (including one in the linked thread with a bunch of mask-off slurs in it) are just this: techfash failing to make a convincing argument that GPT is smart, and whenever it's proven it isn't, it's actually that "a significant chunk of people" would make the same mistake, not the LLM they've bullshitted themselves into thinking is intelligent. it's kind of amazing how often this pattern repeats in the linked thread: GPT's perceived successes are puffed up to the highest extent possible, and its many (many, many) failings are automatically dismissed as something that only makes the model more human (even when the resulting output is unmistakably LLM bullshit)

This is quite unfair. The AI doesn't have I/O other than what we force-feed it through an API. Who knows what will happen if we plug it into a body with senses, limbs, and reproductive capabilities? No doubt somebody is already building an MMORPG with human and AI characters to explore exactly this while we wait for cyborg part manufacturing to catch up.

drink! “what if we gave the chatbot a robot body” is my favorite promptfan cliche by far, and this one has it all! virtual reality, cyborgs, robot fucking, all my dumbass transhumanist favorites

There's actually a cargo cult around downplaying AI.

The high-level characteristics of this AI are something we currently cannot understand.

The lack of objectivity, creativity, imagination, and outright denial you see on HN around this topic is staggering.

no, you’re all the cargo cult! I asked my cargo and it told me so

[–] dgerard@awful.systems 1 points 1 year ago

“a significant chunk of people” would make the same mistake

the same was literally true for ELIZA in 1964
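For context on how little it took: ELIZA was a keyword-and-template substitution program, nothing more, and users still attributed understanding to it. Here's a minimal sketch of the ELIZA-style trick (an illustration, not Weizenbaum's actual DOCTOR script; the rules and names are invented for this example):

```python
import re

# A minimal ELIZA-style responder: match a keyword pattern, paste the
# captured text into a canned template. No model, no reasoning, no state.
RULES = [
    (re.compile(r"\bI need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
]
FALLBACK = "Please go on."

def respond(utterance: str) -> str:
    """Return the first matching template with the captured text filled in."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return FALLBACK
```

So `respond("I need a break")` gives "Why do you need a break?" — pure string surgery, yet this class of program was enough to produce the "it understands me" reaction in 1966.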

[–] gerikson@awful.systems 0 points 1 year ago (1 children)

I've got the ACM piece in a tab, staring at me, challenging me not to nope out with a TL;DR. Is it worth getting into it? I'd love to have some ammo against promptfans of all stripes.

[–] raktheundead@fedia.io 1 points 1 year ago

Basically: AI is (potentially?) useful, but LLMs require substantially more data than a human brain to do what they do, and what they do is limited at best - often less capable in generalised cases than a well-defined physics model. The ideas aren't even new, having their roots in theoretical approaches from the 1940s and applied approaches from the 1980s; they just have a lot more training data and processing power now, which makes them seem more impressive. Even if all the data in the universe were present, this would not lead to AGI, because LLMs can't figure out the "why".

But I don't think the article asserts anything new if you're familiar with the space, and the promptfans will dismiss it anyway.