this post was submitted on 29 Jul 2023
196 points (99.0% liked)

Technology

[–] rysiek@mstdn.social 0 points 1 year ago (1 children)

@lloram239 that's really akin to claiming that a mannequin is a human being because it really really looks alike.

The "predictions about the world" you refer to here are instead predictions about text. They are not based on a model of the world; they are based on the loads and loads of text the model was trained on.

I don't have to prove ChatGPT is not intelligent. That would be proving a negative. The burden of proof is on those claiming that it is intelligent.

[–] lloram239@feddit.de 1 points 1 year ago* (last edited 1 year ago)

that’s really akin to claiming that a mannequin is a human being because it really really looks alike.

For the job of presenting clothes in a shop, it's close enough. The problem domain matters. You can't expect a model that was never trained on a thing to perform well at that thing. Blind people aren't good at drawing pictures either; that doesn't mean they aren't intelligent.

The “predictions about the world” you refer to here are instead predictions about the text.

Text that describes the world. What do you think the electrical signals zapping around your brain are? Cats and dogs? The "world" is not what intelligence operates on. Your brain gets sensory information and that's it (see any of Donald Hoffman's talks). Just like ChatGPT gets text. All the "intelligence" does is figure out patterns in that data and predict what might come next. More diverse data from different senses of course helps. But as a little playing around with ChatGPT easily shows, quite a lot of our understanding actually does survive being mapped into the domain of language and text.
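The "predict what might come next" framing above can be made concrete with a toy sketch. This is purely illustrative and nothing like ChatGPT's internals (which use a neural network over learned embeddings, not raw counts), but the training objective — learn patterns in text, then predict a likely continuation — is the same idea:

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count, for each word, which words follow it in the training text."""
    counts = defaultdict(Counter)
    tokens = corpus.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent continuation seen in training, or None."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

# Tiny made-up "corpus" for demonstration.
corpus = "the cat sat on the mat the cat ate the fish"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # "cat" followed "the" most often in training
```

The model has no concept of cats or mats, only statistics over text — which is exactly the point under debate: how much of a "world model" such statistics can or cannot capture.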