this post was submitted on 23 Dec 2023
193 points (86.7% liked)

[–] huginn@feddit.it 180 points 11 months ago (21 children)

Friendly reminder that your predictive text, while very compelling, is not alive.

It's not a mind.
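For anyone wondering what "predictive text" means mechanically, here's a toy sketch (made-up corpus, everything illustrative): a bigram model that counts which word follows which and emits the most frequent successor. LLMs are enormously more sophisticated, but the core operation — predict the next token from statistics over text — is the same kind of thing.

```python
# Toy "predictive text": a bigram model that counts word transitions
# and always emits the most frequently observed next word.
# The corpus is invented purely for illustration.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count which word follows which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word):
    """Return the most frequent word observed after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict("on"))  # "the" — every "on" in this corpus is followed by "the"
```

There's no understanding anywhere in there, just counting — which is the point being made above.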

[–] Poggervania@kbin.social 84 points 11 months ago (2 children)

Cyberpunk 2077 sorta explores this a bit.

There’s a vending machine with a personality that talks to people walking by. The quest chain basically has you chatting with it, even giving it advice about a person he has a crush on, and you eventually become friends with this vending machine.

Just when it seems increasingly apparent that the vending machine is an AI developing sentience, it turns out he simply has a really well-coded socializing program. He even admits as much when he’s about to be deactivated.

So, to reiterate what you said: predictive text and LLMs are not alive nor a mind.

[–] dlpkl@lemmy.world 47 points 11 months ago* (last edited 11 months ago)

I don't care, Brandon was real to me okay 😭

[–] billwashere@lemmy.world 22 points 11 months ago (1 children)

Which is why the Turing Test needs to be updated. These text models are getting really good at fooling people.

[–] bionicjoey@lemmy.ca 18 points 11 months ago (1 children)

The Turing test isn't just that there exists some conversation you can have with a machine where you wouldn't know it's a machine. The Turing test is that you could spend an arbitrary amount of time talking to a machine and never be able to tell. ChatGPT doesn't come anywhere close to this, since there are many subjects where it quickly becomes clear that the model doesn't understand the meaning of the text it generates.

[–] Corgana@startrek.website 7 points 11 months ago* (last edited 11 months ago)

Exactly, thank you for pointing this out. It also assumes the tester has knowledge of the wider context in which the test exists. GPT could probably fool someone from the Middle Ages, but that person wouldn't know exactly what it is they were testing for.

[–] CrayonRosary@lemmy.world 19 points 11 months ago (4 children)

Prove to me you have a mind and I'll accept what you're saying.

[–] penguin@sh.itjust.works 30 points 11 months ago (18 children)

Well no one can prove they have a mind to anyone other than themselves.

And to extend that: there's evidently some way for electrical information processing to give rise to consciousness, since our brains manage it, yet no one knows how that could be possible.

Meaning something like a true, alien AI would probably conclude that we are not conscious and instead are just very intelligent meat computers.

So, while there's no reason to believe that current AI models could result in consciousness, no one can prove the opposite either.

I think the argument currently boils down to, "we understand how AI models work, but we don't understand how our minds work. Therefore, ???, and so no consciousness for AI"

[–] General_Effort@lemmy.world 29 points 11 months ago

“No brain?”

“Oh, there’s a brain all right. It’s just that the brain is made out of meat! That’s what I’ve been trying to tell you.”

“So … what does the thinking?”

“You’re not understanding, are you? You’re refusing to deal with what I’m telling you. The brain does the thinking. The meat.”

“Thinking meat! You’re asking me to believe in thinking meat!”

[–] bionicjoey@lemmy.ca 1 points 11 months ago (2 children)

I can prove to you ChatGPT doesn't have a mind. Just open up the Sunday Times Cryptic Crossword and ask ChatGPT to solve and explain the clues.

[–] OrderedChaos@lemmy.world 10 points 11 months ago (2 children)

I'm confused by this idea. Maybe I'm just seeing it from the wrong point of view. If you asked me to do the same thing I would fail miserably.

[–] KairuByte@lemmy.dbzer0.com 5 points 11 months ago

Not the original intent, but you’d likely throw your hands up immediately and say you don’t know; an LLM would hallucinate an answer.

[–] bionicjoey@lemmy.ca 1 points 11 months ago (1 children)

But some humans can, since solving them requires a simultaneous understanding of words' meanings and of how they are spelled

[–] General_Effort@lemmy.world 2 points 11 months ago

What should we conclude about most humans who cannot solve these crosswords?

It should be relatively easy to train an LLM to solve these puzzles. I am not sure what that would show.

[–] General_Effort@lemmy.world 1 points 11 months ago

Can you please explain the reasoning behind the test?

[–] JayDee@lemmy.ml 10 points 11 months ago (1 children)

I don't think most people will care, so long as their NPC interaction ends up compelling. We've been reading stories about people who don't exist for centuries, and that's stopped no one from sympathizing with them - and now there's a chance you could have an open conversation with them.

Like, I think a lot of us assume we care about the authors who write the character dialogue, but I think most people actually choose not to know who is behind their favorite NPCs, to preserve some sense that the NPC's personality isn't manufactured.

Combine that with everyone becoming steadily more lonely over the years, and I think AI-generated NPC interactions are going to take escapism to another level.

[–] PsychedSy@sh.itjust.works 2 points 11 months ago (1 children)

Poem poem poem poem then the NPC start quoting Mein Kampf and killing all the cat wizards.

[–] JayDee@lemmy.ml 1 points 10 months ago (1 children)

Lol, yeah. If generative AI text stays as shitty as it is now, then this whole discussion is moot. Whether that will be the case has yet to be seen. What is an indisputable fact, though, is that right now is the worst that generative AI will ever be again. It's only able to improve from here.

[–] Barbarian@sh.itjust.works 1 points 10 months ago (1 children)

> It's only able to improve from here.

That isn't actually true. With the rise in articles, posts and comments written by these algorithms, experts are warning about model collapse. Basically, the lack of decent human-written training data will destroy future generative AI before it can even start.
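The worry can be sketched with a toy experiment (illustrative numbers only, not a claim about any real model): treat each "generation" as a model fit solely on samples produced by the previous generation. Sampling noise compounds, and the fitted distribution drifts toward collapse.

```python
# Toy model collapse: each generation refits a Gaussian using only
# samples drawn from the previous generation's fitted Gaussian.
# Variance is lost, on average, at every step, so the spread decays.
import random
import statistics

random.seed(0)

mu, sigma = 0.0, 1.0   # generation 0: the "real data" distribution
n = 50                 # small training samples make the effect fast
initial_sigma = sigma

for generation in range(200):
    samples = [random.gauss(mu, sigma) for _ in range(n)]
    mu = statistics.mean(samples)       # refit on own output...
    sigma = statistics.stdev(samples)   # ...shedding variance over time

print(f"spread after 200 generations: {sigma:.4f}")
```

By the end, the spread has shrunk far below the starting value: the "model" has narrowed onto a sliver of the original distribution, which is the gist of the model-collapse warnings.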

[–] JayDee@lemmy.ml 2 points 10 months ago

That's an interesting point. We are seeing a similar kind of issue with search engines losing effectiveness due to search engine optimization on websites.

So it is possible that generative AI will become enshittened.

[–] Bluehat@lemmynsfw.com 0 points 11 months ago* (last edited 11 months ago) (1 children)
[–] Bernie_Sandals@lemmy.world 1 points 11 months ago

If you cut out a tiny bit of someone's brain and then hooked it up to a cpu, would it be a mind? No, of course not, lol. Even if we got Biocomputers to work, we still wouldn't have any synthetic hardware even close to being strong or fast enough to actually create or even simulate a brain.
