this post was submitted on 23 Oct 2023
82 points (100.0% liked)


KICK TECH BROS OUT OF 196

[–] Gaybees@artemis.camp 6 points 1 year ago (1 children)

I don’t think they said anything like “it can’t be intelligent because it’s wrong sometimes.” It’s more that the AI doesn’t exist outside of the prompts you feed it. Humans can introspect, reflect on the actions we’ve taken, and question what effect those actions had on the situation. Humans can have desires: we can want to be more accurate and truthful in our actions, and reflect on how we might have failed to do this in the past. AI cannot do this, and we can do it outside the prompt of a similar situation. AI only takes an input, generates an output, wipes its hands, and calls it a day. It doesn’t matter whether it gave you a correct answer, a wrong answer, or a completely illegible sentence.

[–] testfactor@lemmy.world 3 points 1 year ago

The previous guy and I agreed that you could trivially write a wrapper around it that gives it an internal monologue and a feedback loop. So that limitation is artificial and easy to overcome, and it has been demonstrated in a number of different studies.
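The kind of wrapper being described can be sketched in a few lines. This is only an illustration, not anyone's actual implementation: `call_model` is a hypothetical stand-in for a real LLM API call, and the loop structure (answer, self-critique, revise) is one common shape such wrappers take.

```python
# Sketch of an "internal monologue" wrapper around a language model.
# call_model is a hypothetical placeholder; a real version would query an LLM.
def call_model(prompt: str) -> str:
    return f"draft answer to: {prompt}"

def answer_with_reflection(question: str, rounds: int = 2) -> str:
    """Feed the model's own output back in as a critique, then revise."""
    answer = call_model(question)
    for _ in range(rounds):
        critique = call_model(f"Critique this answer: {answer}")
        answer = call_model(
            f"Question: {question}\n"
            f"Previous answer: {answer}\n"
            f"Critique: {critique}\n"
            f"Write an improved answer."
        )
    return answer
```

The point is that the "monologue" lives entirely in the wrapper: the model itself is still stateless, but the loop gives it a place to reflect on its own prior output.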

And it's also trivially easy to have the results of its actions go into that feedback loop and influence its weights and models.
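As a toy illustration of feedback influencing a model's weights (everything here is invented for the example; a real system would fine-tune an LLM, not a two-weight linear scorer):

```python
# Toy sketch: feedback on an action nudges the model's weights.
def score(weights, features):
    """Linear scorer standing in for a real model."""
    return sum(w * f for w, f in zip(weights, features))

def update_from_feedback(weights, features, reward, lr=0.1):
    """Shift each weight toward features of actions that earned positive reward."""
    return [w + lr * reward * f for w, f in zip(weights, features)]

weights = [0.0, 0.0]
# Action with features [1, 0] drew positive feedback; [0, 1] drew negative.
weights = update_from_feedback(weights, [1, 0], reward=1.0)
weights = update_from_feedback(weights, [0, 1], reward=-1.0)
# The model now scores the rewarded kind of action higher than the punished one.
```

The mechanism (reward-weighted updates) is standard; only the scale differs between this toy and real fine-tuning.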

And is having wants and desires necessary to be an "intelligence"? That's getting into the philosophy side of the house, but I would argue that's superfluous.