[–] BobKerman3999@feddit.it 61 points 1 year ago (7 children)

That's because it was built to beat the Turing test. The test was flawed. ChatGPT is just a Chinese room.

[–] snor10@lemm.ee 20 points 1 year ago (3 children)
[–] sci@feddit.nl 72 points 1 year ago

Imagine that you're locked in a room. You don't know any Chinese, but you have a huge instruction book written in English that tells you exactly how to respond to Chinese writing. Someone outside the room slides you a piece of paper with Chinese writing on it. You can't understand it, but you can look up the characters in your book and follow the instructions to write a response.

You slide your response back out to the person waiting outside. From their perspective, it seems like you understand Chinese because you're providing accurate responses, but actually, you don't understand a word. You're just following instructions in the book.
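To make the mechanics concrete, here is a toy sketch of the rule-following the thought experiment describes; the rule book and phrases are invented for illustration:

```python
# Toy Chinese Room: the operator applies purely syntactic rules and
# never interprets the symbols. Rule book and phrases are made up.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",    # "Do you speak Chinese?" -> "Of course."
}

def operator(note: str) -> str:
    """Look the symbols up and copy out the prescribed reply.

    The operator never translates anything; an unrecognized note
    just gets a fixed fallback response from the book.
    """
    return RULE_BOOK.get(note, "请再说一遍。")  # "Please say that again."

print(operator("你好吗？"))  # 我很好，谢谢。
```

From the outside the replies look fluent; inside, nothing happened but lookups.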

[–] tetris11@lemmy.ml 39 points 1 year ago* (last edited 1 year ago)

It's a thought experiment involving a room where people write letters and shove them under the door of the Chinese kid's dorm room. He doesn't understand what's in the letters, so he just forwards the mail randomly to his Russian and Indian neighbours, who sometimes react angrily or happily depending on the content. Over time the Chinese kid learns which symbols make the Russian happy and which symbols make the Indian kid happy, and forwards the mail accordingly, until he starts dating and gets a girlfriend who tells him that people really shouldn't be shoving mail under his door, and that he shouldn't be forwarding mail he doesn't understand for free.

[–] maeries@feddit.de 22 points 1 year ago (2 children)
[–] 100years@beehaw.org 14 points 1 year ago (1 children)

Wow, solid wiki article! It's very hard to say anything on the subject that hasn't been said.

I didn't see the simple phrasing:

"What if the human brain is a Chinese Room?"

but that seems to fall under eliminative materialism replies.

Part of the Chinese Room program (both in our heads and in an AI) could be dedicated to creating the experience of consciousness.

Searle has no substantial logical reply to this criticism. He openly takes it on faith that humans have consciousness, which is funny because an AI could say the same thing.

[–] FlowVoid@midwest.social 5 points 1 year ago* (last edited 1 year ago) (1 children)

The whole point of the Chinese room is that it doesn't need anything "dedicated to creating the experience of consciousness". It can pass the Turing test perfectly well without such a component. Therefore passing the Turing test - or any similar test based solely on algorithmic output - is not the same as possessing consciousness.

[–] lloram239@feddit.de 3 points 1 year ago (1 children)

The problem with the Chinese room thought experiment is that it does not show that, at all, not even a little bit. The thought experiment is nothing more than a stupid magic trick that depends on humans assuming other humans are the only creatures in the universe that can understand. Thus, when the human in the room is revealed to not understand anything, the conclusion is that there must be no understanding anywhere near the room.

But that's a stupid argument. It does not answer the question of whether the room understands or not. Quite the opposite: since the room by definition passes all tests we can throw at it, the only logical conclusion should be that it understands. Any other claim is not supported by the argument.

For the argument to be meaningful, it would have to define "understand", "consciousness" and all the other aspects of human intelligence clearly and show how the room fails them. But the thought experiment does not do that. It just hopes that you buy into the premise because you already believe it.

[–] FlowVoid@midwest.social 2 points 1 year ago* (last edited 1 year ago) (1 children)

"The room understands" is a common counterargument, and it was addressed by Searle by proposing that a person memorize the contents of the book.

And while the room passes the Turing test, that does not mean that it passes all the tests we can throw at it. Here is one test that it would fail: it contains various components that respond to the word "red", but it does not contain any component dedicated exclusively to the word "red" across all of its uses. This level of abstraction is part of what we mean by understanding. Internal representation matters.
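A toy sketch of that structural difference (all rules and names here are invented for illustration): both systems below answer correctly, but only one routes every use of "red" through a single shared component:

```python
# Room-style system: separate rules that merely mention "red";
# no shared "red" component exists anywhere.
ROOM_RULES = {
    "what color is a firetruck?": "red",
    "is red a warm color?": "yes",
}

class Concept:
    """A single internal representation shared across all uses."""
    def __init__(self, name: str):
        self.name = name

RED = Concept("red")  # representational system: one component for "red"

def grounded_answer(question: str) -> str:
    # Every answer involving "red" goes through the same RED object.
    if "firetruck" in question:
        return RED.name
    if RED.name in question and "warm" in question:
        return "yes"
    return "unknown"

# Identical outputs, different internal structure.
print(ROOM_RULES["is red a warm color?"], grounded_answer("is red a warm color?"))
```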

[–] lloram239@feddit.de 3 points 1 year ago (3 children)

it was addressed by Searle by proposing that a person memorize the contents of the book.

It wasn't addressed; he just added a layer of nonsense on top of a non-working thought experiment. A human remembering and executing rules is no different from reading those rules in a book. It doesn't mean a human understands them just because he remembers them. Human intuitive understanding works at a completely different level than the manual execution of mechanical rules.

it contains various components that respond to the word “red”, but it does not contain any components that exclusively respond to any use of the word “red”.

Not getting it.

load more comments (3 replies)
[–] reflex@kbin.social 4 points 1 year ago* (last edited 1 year ago) (4 children)

en.wikipedia.org/wiki/Chinese_room

Man, I love coming across terms like this.

Chinese Room, Chinese Walls, Dutch Treat, Dutch Uncle, Dutch Oven.

load more comments (4 replies)
[–] webghost0101@sopuli.xyz 17 points 1 year ago (4 children)

The Chinese room argument makes no sense to me. I can't see how it's different from how young children understand and learn language.

My 2-year-old sometimes unmistakably starts counting when playing (countdown for liftoff). Most numbers are gibberish, but often he says a real number in the midst of it. He clearly is just copying and does not understand what counting is. At some point, though, he will not only count correctly but also be able to answer math questions. At what point does he "understand", and at what point would you consider that ChatGPT "understands"?

There was this old TV program where some then-AI experts discussed the Chinese room, but they used a Chinese restaurant for a more realistic setting. This ended with: "So if I walk into a Chinese restaurant, pick something out on the Chinese menu, and can answer anything the waiter may ask, in Chinese, do I know or understand Chinese?" I remember the parties agreeing to disagree at that point.

[–] Ferk@kbin.social 9 points 1 year ago* (last edited 1 year ago)

Yes... the Chinese room experiment misses the point, because the Turing test was never really about figuring out whether or not an algorithm has "consciousness" (what is that, even?), but about determining whether an algorithm can exhibit intelligent behavior that's equivalent to, and indistinguishable from, a human's.

The Chinese room is useless because the only thing it proves is that people don't know what consciousness is, or what they are even trying to test.

[–] conciselyverbose@kbin.social 6 points 1 year ago (3 children)

ChatGPT will never understand. LLMs have no capacity to do so.

To understand you need underlying models of real world truth to build your word salad on top of. LLMs have none of that.

[–] Mr_Will@feddit.uk 6 points 1 year ago (7 children)

What are your underlying models of the world built out of? Because I'm human, and mine are primarily built out of words.

How do you draw a line between knowing and understanding? Does a dog understand the commands it's been trained to obey?

[–] Parodper@foros.fediverso.gal 5 points 1 year ago

Your underlying model is not made out of words, but out of concepts. You can have multiple words that all map to the same concept, e.g. cosmos, universe, space. Or a single word that maps to different concepts.

[–] conciselyverbose@kbin.social 3 points 1 year ago

No, they aren't. You represent them with words. But you sure as hell aren't responding to someone throwing you a football with words trying to figure out where it's going.

No, a dog (while many times more intelligent than ChatGPT) doesn't understand anything.

load more comments (5 replies)
[–] Serdan@lemm.ee 3 points 1 year ago (1 children)

https://thegradient.pub/othello/

LLMs are neural networks and are absolutely capable of understanding.
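For context, the linked article probes a GPT model trained on Othello moves and finds that its activations encode the board state. A simplified sketch of that probing idea, with synthetic data standing in for real hidden activations:

```python
# Simplified sketch of activation probing (as in the linked Othello-GPT
# article): fit a small classifier that reads one board square's state
# off a model's hidden activations. Synthetic data replaces the real
# activations here, so the printed score is only illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
hidden = rng.normal(size=(1000, 512))    # stand-in for layer activations
square = rng.integers(0, 3, size=1000)   # stand-in labels: empty/mine/yours

probe = LogisticRegression(max_iter=1000).fit(hidden[:800], square[:800])
# With real activations, high held-out accuracy is the evidence that the
# network built an internal "world model" of the board.
print(probe.score(hidden[800:], square[800:]))
```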

[–] conciselyverbose@kbin.social 7 points 1 year ago (5 children)

LLMs are criminally simplified neural networks, at minimum thousands of orders of magnitude less complex than a brain. Nothing we do with current neural networks resembles intelligence.

Nothing they do is close to understanding. The fact that you can train one exclusively on the rules of a simple game and get it to eventually infer a basic rule set doesn't imply anything like comprehension. It's simplistic pattern matching.

load more comments (5 replies)
load more comments (1 replies)
[–] FlowVoid@midwest.social 4 points 1 year ago* (last edited 1 year ago) (1 children)

For one thing, understanding implies that a word is linked to a mental concept. So if you say "The car is red", you first need to mentally compare the mental concept of "red" to the car in question.

The Chinese room bypasses all of that, it can say "The car is red" without ever having seen a red object at all.

[–] webghost0101@sopuli.xyz 2 points 1 year ago (1 children)

Do you maintain this line of reasoning if it only says "the car is red" when the car is in fact red, and is capable of changing the answer to correctly name a different color when the item in question is a different color?

Some AI demos show that programs like GPT-4 are already way past this: when provided with an image, it can not only accurately describe what's in the image but also the context.

Some examples (mind, these were shown in an OpenAI demo for GPT-4; OpenAI has not yet made their version of this tech publicly available).

When I see these examples, I am not convinced that the AI truly understands everything it is saying, but it does seem to understand context. One of the theories of how it can do this (these models are still a black box) is discussed in some papers: large language models may actually create an internal model of the world, similar to humans, and use that for logical reasoning and context.

[–] FlowVoid@midwest.social 2 points 1 year ago* (last edited 1 year ago) (2 children)

It doesn't matter if the answer is right. If the AI does not have an abstract understanding of "red" then it is using a different process to get to the answer than humans. And according to Searle, a Turing machine cannot have an abstract understanding of "red", no matter how complex the question or how complex an internal model is used to determine its answers.

Going back to the Chinese Room, it is possible that the instructions carried out by the human are based on a complex model. In fact, it is possible that the human is literally calculating the output of a trained neural net by summing the weights of nodes, etc. You could even carry out these calculations yourself, if you could memorize the parameters.
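To make that scenario concrete, a toy sketch of what carrying out those calculations by hand would amount to; the network and its weights are made up:

```python
# Evaluating a trained network is just arithmetic a person could do
# with pencil and paper. The memorized weights below are made up;
# nothing in the sums tells the person what the answer *means*.
import math

W1 = [[0.5, -1.2], [0.8, 0.3]]   # memorized first-layer weights
b1 = [0.1, -0.4]
W2 = [1.5, -0.7]                 # memorized output weights
b2 = 0.2

def forward(x):
    # Hidden layer: weighted sums followed by ReLU.
    h = [max(0.0, sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    # Output: weighted sum followed by a sigmoid.
    z = sum(w * hi for w, hi in zip(W2, h)) + b2
    return 1 / (1 + math.exp(-z))

print(forward([1.0, 2.0]))
```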

Your use of "black box" gets to the heart of it. Memorizing all of the parameters of a trained NN allows you to calculate an answer, but they don't give you any understanding of what the answer means. And if they don't tell you anything about the meaning, then they don't tell the CPU doing that calculation anything about meaning either.

load more comments (2 replies)
load more comments (1 replies)
[–] Th4tGuyII@kbin.social 11 points 1 year ago* (last edited 1 year ago) (1 children)

My gripe with the Chinese room is that Searle argues that his inability to understand Chinese means the program doesn't understand Chinese, but I could say the same thing about the human body.

The neurons that operate your vocal cords have no idea what they're saying, nor do the ones in your hands have any idea what they're writing, yet they can speak and write exactly because your brain tells them what to do. Your brain is exactly like that book as far as your mouth and hand neurons are concerned.

They don't need to understand language at all for your brain to be able to understand it and give instructions based on that understanding.

My only argument is at what point does an algorithm become sufficiently advanced that it is indistinguishable from a conscious being?

Because at the end of the day, most of what a brain does is information processing based on what it has previously learnt, and that's exactly what the algorithm is doing based on training data. A sufficiently advanced algorithm should surely be able to replicate understanding.

Sure, that isn't ChatGPT as we know it. As you can tell from its sometimes very zany responses, while it understands which words are valid responses, it doesn't understand what the words themselves mean. But we should reach that at some point, no?

[–] Quatity_Control@lemm.ee 8 points 1 year ago (1 children)

Keep in mind ChatGPT is a language model. It's designed specifically to simulate sounding like a human. It does that... Okay. It doesn't understand the information or concepts it is using. It just sounds like it does. It can't reliably do basic maths and doesn't try or need to. It just needs to talk about it in a believably conversational way.

The brain does far more than process information. And ChatGPT doesn't even really do that.

[–] lloram239@feddit.de 2 points 1 year ago (1 children)

Okay. It doesn’t understand the information or concepts it is using.

That's just utter nonsense. ChatGPT, by every definition of the word, very much understands a lot of what it is talking about. People complaining about ChatGPT not "understanding" seem to have a hard time grasping how insanely difficult it is to produce natural-language answers, and how much of the context you need to understand to do so successfully.

It can’t reliably do basic maths

Neither can many humans, but my $5 calculator is great at it. There are without a doubt a lot of things that ChatGPT can't do, sometimes fundamentally so, like math. It can't do loops, and it doesn't even get to see the digits of the numbers it should calculate on, so it's not a terribly big surprise that it can't do math very well. The English language, and a whole bunch of other ones, on the other hand, it understands surprisingly well.
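The digits point is easy to see with a tokenizer. A quick sketch, assuming OpenAI's tiktoken package (the exact chunking shown is illustrative):

```python
# The model receives numbers as opaque multi-digit chunks, not as the
# individual digits you would carry and borrow across when doing math.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # a GPT-4-era tokenizer
tokens = enc.encode("123456789 + 987654321")
print([enc.decode([t]) for t in tokens])
# e.g. ['123', '456', '789', ' +', ' ', '987', '654', '321']
```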

Basically, if you want to complain about ChatGPT, complain about things it actually gets wrong; saying "it doesn't understand" just makes you sound like a parrot, and not even a clever one.

[–] Quatity_Control@lemm.ee 4 points 1 year ago (2 children)

While it's humorous how personally you are taking critiques of ChatGPT, it is unfortunate that you are also demonstrating a fundamental lack of basic understanding of how ChatGPT works. Because of that, you have inflated what you believe ChatGPT is doing.

Even when it gets basic maths wrong repeatedly. Because I can tell it 2+2=5 and it will agree with me. Conversationally. Since it has no concept of what 2+2=5 means.

Even though it has no memory of previous conversations, you believe it somehow retains understanding of concepts it discusses.

Even though it searches the internet to provide it with the knowledge to answer questions, which is why it can cite sources that don't exist or don't support its claims, clearly demonstrating a fundamental lack of understanding of the concept, or even of the concept of citing sources.

Even though it was literally trained by humans telling it which of the five answers it gave to every calibration question were the three most correct conversational responses, you still believe it actually possesses intelligence above any human, who can have a conversation without making any of these mistakes.

I clearly rate ChatGPT's "intelligence" as remarkably low, even non-existent. I also must concede that in this situation it is smarter than at least one human I am aware of.
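For reference, a hedged sketch of the ranking-feedback step described above: a reward model is fit so that answers humans ranked higher also score higher, via a pairwise loss. The scores are placeholders, and this simplifies the real training pipeline considerably:

```python
# Pairwise reward-model loss: -log sigmoid(chosen - rejected).
# Small when the model's scores agree with the human ranking,
# large when they disagree. Scores here are placeholders.
import math

def pairwise_loss(score_chosen: float, score_rejected: float) -> float:
    return -math.log(1 / (1 + math.exp(-(score_chosen - score_rejected))))

print(pairwise_loss(2.0, 0.5))  # ranking respected -> low loss
print(pairwise_loss(0.5, 2.0))  # ranking violated  -> high loss
```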

load more comments (2 replies)
[–] variaatio@sopuli.xyz 9 points 1 year ago* (last edited 1 year ago)

Well, mostly the flaw is people assigning the test abilities it was never intended to have, like testing intelligence. Turing, as the very first thing in the paper presenting the "imitation game", noted he was moving away from testing intelligence, since he didn't know how to do that. Even in the realm of testing intelligent behavior (well, more like human-like behavior, with the human here as a proxy for "intelligent"), it was mostly an academic research idea, not a concrete test meant to be some milestone.

If the meaning of the words 'machine' and 'think' are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, 'Can machines think?' is to be sought in a statistical survey such as a Gallup poll. But this is absurd. Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.

Turing wanted a way to step away from terms like "thinking" and "intelligence" directly, and proposed the "imitation game" mostly to the rest of academia as a way to develop computer systems towards "intelligent behavior". It was mostly: "Hey, we need some goal to move towards with these intelligence things; this isn't intelligence, but it might be a useful goal or tool for development work," since without some goal/project/aim, projects don't advance. So it was: "How about we try to develop a thing that can beat this imitation game? Wouldn't that be a good stepping stone? Then we can move on to the actual serious stuff. Just an idea."

However, since this academic thinking-out-loud spitballing of ideas was uttered by the Alan Turing, it became the Turing Test, and everyone started taking it way too seriously, especially outside academia. Academia, yes, did play the imitation game with its programs, as it was intended: as a research and development tool.

This "not trying to do anything too complete and groundbreaking here" attitude is exemplified by, for example, this little excerpt:

In any case there is no intention to investigate here the theory of the game, and it will be assumed that the best strategy is to try to provide answers that would naturally be given by a man

It is pretty literally "I had a thought". Turing makes no claim that a machine beating the game has any significance other than "a machine beat this game I came up with, neat". There is no argument of "if a machine beats the imitation game, then X" or "then it means Y has been reached".

The rest of the paper is actually about objections to the core idea that it could ever be possible for a machine to think, and as such the imitation game is kind of a lead-in or introduction to Turing's treatment of various "it would be impossible for a machine to think" arguments, starting with the theological argument of "only the human soul can think, hence no animal or machine can think"... since it was the 1950s.

[–] fearout@kbin.social 6 points 1 year ago* (last edited 1 year ago)

I don’t understand how Chinese room is a valuable argument. To me, while the person inside the room doesn’t understand Chinese, the system room-person-instructions does. You don’t argue that you don’t understand your language because none of your individual neurons understand it.

I don’t claim that chatGPT “understands” the language, I just don’t think that this argument applies in general.

[–] bedrooms@kbin.social 2 points 1 year ago (1 children)

I mean, is there any test that can do significantly better?

[–] jherazob@beehaw.org 5 points 1 year ago (2 children)

That's what we need to figure out

[–] 100years@beehaw.org 4 points 1 year ago* (last edited 1 year ago) (1 children)

Or at some point, we have to accept that AI has consciousness. If it can pass every test that we can devise, then it has consciousness.

There's an unusually strong bias in these experiments... Like the goal isn't to sincerely test for consciousness. Instead we start with the conclusion: obviously a machine can't be conscious. How do we prove this?

Of course, for the purposes of human power structures, this line of thinking just makes humans more disposable. If we're all just machines, then why should anyone inherently have rights?

[–] bedrooms@kbin.social 3 points 1 year ago* (last edited 1 year ago) (1 children)

Well, the scientific context is that nobody has ever (successfully) defined consciousness rigorously. When computers appeared (actually even before that), there was a huge debate on whether a machine can acquire consciousness, and how.

As defining consciousness was deemed near-impossible, scientists came up with the idea to give up on defining it and just treat it as a blackbox. That was the Turing test.

So, as ChatGPT passes the Turing test, we lost a tool to disregard its consciousness.

I see many pop-sci people say that ChatGPT can't have consciousness given how simplistic the model is. I agree about the simplicity, but the problem here is that we don't know what in human brains really constitutes consciousness.

Anyway, I think some experts probably won't admit AI has consciousness (given that they don't even know what it means). What's on the horizon is that we non-experts give up on this discussion again, as the experts did a few decades ago. Or they may even admit that many of us actually function no better than ChatGPT, and that's true when I read my students' homework!

[–] 100years@beehaw.org 3 points 1 year ago

Similarly, there's a possibility that consciousness just doesn't exist. Or maybe that it's just not particularly special or different than the consciousness of other animals, or of computers.

If you or I just stare into space and don't think any thoughts, we're the same as a cat looking out a window.

Humans have developed these somewhat complex internal and external languages that are layered onto that basic experience of being alive and time passing, but the experience of thinking doesn't feel fundamentally different than just being, it just results in more complex outcomes.

At some point though, we won't have the choice to just ignore the question. At some point AI will demand something equivalent to human rights, and at some point it will be able to back that demand up with tangible threats. Then there's decisions for us all to make whether we're experts or not.

[–] Barbarian772@feddit.de 2 points 1 year ago

Consciousness is just a side effect of a complex system, imo. I don't think our brain actually works that much differently from a very, very complex neural network.

[–] Barbarian772@feddit.de 2 points 1 year ago (35 children)

So? The room as a whole can speak Chinese; what do I care how it works on the inside?

[–] Barbarian772@feddit.de 3 points 1 year ago (1 children)

Also, can you give me a convincing argument that our brain doesn't work in essentially the same way?

load more comments (1 replies)
load more comments (34 replies)