[–] Veraticus@lib.lgbt 1 points 1 year ago (3 children)

If you truly believe humans are simply autocompletion engines, then I just don't know what to tell you. I think most reasonable people would disagree with you.

Humans have actual thoughts and emotions; LLMs do not. The neural networks that LLMs use, while conceptually based on biological neural networks, are not biological neural networks. It is not a difference of complexity, but of kind.

Additionally, no matter how much statistical machinery, CPU power, or data you give an LLM, it will not develop cognition, because it is not designed to mimic cognition. It is designed to link words together. It does that and nothing more.
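
To make concrete what "link words together" means mechanically, here is a toy sketch of autocomplete: pick the next word from a learned conditional distribution, append it, repeat. The bigram table below is invented purely for illustration; a real LLM learns a vastly richer distribution conditioned on its entire context.

```python
# Toy autoregressive "autocomplete": repeatedly sample the next word
# from P(next | current). These probabilities are made up; a real LLM
# conditions on the whole preceding context, not just one word.
import random

bigram = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 0.8, "sat": 0.2},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def autocomplete(token, steps=4):
    out = [token]
    for _ in range(steps):
        dist = bigram.get(out[-1])
        if dist is None:
            break  # no continuation learned for this word
        words, probs = zip(*dist.items())
        out.append(random.choices(words, weights=probs)[0])
    return " ".join(out)

print(autocomplete("the"))  # e.g. "the cat sat down"
```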

A dog is more sentient than an LLM in the same way that a human is more sentient than a toaster.

[–] Zormat@lemmy.blahaj.zone 3 points 1 year ago

We all want to believe that humans, or indeed animals as a whole, have some secret special sauce that makes us fundamentally distinguishable from statistical algorithms that approximate a best-fit function according to some cost metric. But the fact of the matter is that we don't.
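
As a minimal sketch of what "approximate a best-fit function according to some cost metric" looks like in practice, here is gradient descent fitting a one-parameter linear model; the data points, learning rate, and iteration count are all arbitrary illustrative choices.

```python
# Fit y ≈ w*x by minimizing mean squared error with gradient descent.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # made-up (x, y) pairs, roughly y = 2x

w = 0.0    # the single weight we learn
lr = 0.01  # learning rate
for _ in range(1000):
    # d/dw of mean((w*x - y)^2) = mean(2*(w*x - y)*x)
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad

print(round(w, 2))  # ~2.04, the least-squares best fit
```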

There is no science to support the idea that biological neurons are particularly special, and there are reams and reams of papers suggesting that real neural cognition is little more than an extremely powerful statistical machine.

I don't care what "most reasonable people" think. "Most reasonable people" don't have an opinion about the axiom of choice, or about the existence of central pattern generators. That's not to devalue them, but their opinions on things this far outside their expertise are worth about as much as my opinions on the concept of art. I am a professional in neural computation, and I put it to you to even hypothesize about how animal neural computation is fundamentally distinct from LLM computation.

Like I said, we are wildly more capable than GPT, because our hardware is wildly more complex than any ANN, but the fundamental computing strategy is not all that different.
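
For what it's worth, here is a sketch of the shared primitive both sides are arguing over: a unit that takes a weighted sum of inputs and passes it through a nonlinearity. ANNs are built from exactly this, and simplified rate models of biological neurons look broadly similar; the weights and inputs below are made up.

```python
# One artificial neuron: weighted sum of inputs plus a bias, squashed
# through a sigmoid nonlinearity. Weights/inputs are arbitrary examples.
import math

def neuron(inputs, weights, bias):
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

print(neuron([0.5, 0.1, 0.9], [0.4, -0.2, 0.7], bias=-0.3))  # ~0.62
```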

[–] Zormat@lemmy.blahaj.zone 3 points 1 year ago

In a more diplomatic reading of your post, I'll say this: yes, I think humans are basically incredibly powerful autocomplete engines. The distinction is that an LLM has to autocomplete a single prompt at a time, with plenty of time between prompt and response to consider the best result, while living animals are autocompleting a continuous and endless barrage of multimodal, high-resolution prompts, and doing it quickly enough that we can manipulate the environment (the prompt generator) to some degree.

Yeah, biocomputers are fucking wild and put silicon to shame. The issue I have is with considering biocomputation as something that fundamentally cannot be done by any computational engine; as far as neural computation is understood, it's a really sophisticated statistical prediction machine.

[–] emptiestplace@lemmy.ml 1 points 1 year ago

"most reasonable people" - indirect ad hominem is still ad hominem. You are making a fool of yourself.