TechTakes

SpiderShoeCult@sopuli.xyz -5 points 8 months ago

if you think about selection bias, namely that one normally chooses to surround oneself with like-minded people, and add the fact that people would normally not consider themselves non-sapient, it sort of makes sense though, dunnit?

family, true, you don't choose, but I figure statistically people are likely to have strong feelings about their family, and about the implications for themselves if they admit their family is indeed non-sapient (though blood ties are a different topic, best left undisturbed in this context)

for the record, I never said MY friends and family; I was instructing the other commenter to look beyond their own circle. I figured, since they were so convinced that the average human was not, in fact, about as dumb as an LLM, that their social circle skews their statistics a bit.

Amoeba_Girl@awful.systems 12 points 8 months ago

human beings are smart. bad things don't happen because people are stupid. this kind of thinking is dehumanising and leads to so much evil in our world. people are not LLMs. they're people like you. they have thoughts. they act for reasons. don't dehumanise them.

SpiderShoeCult@sopuli.xyz -2 points 8 months ago

I would point you to Hanlon's razor for the first part there.

it's not about dehumanizing, it's merely comparing the outputs. it doesn't really matter if they act for reasons or have thoughts if the output is the same. should we be more forgiving if an LLM outputs crap because it's just a tool, or more forgiving if a human outputs the exact same crap because it's a person?

and, just for fun, to bring solipsism into this, how do we actually know that they have thoughts?

Amoeba_Girl@awful.systems 8 points 8 months ago
SpiderShoeCult@sopuli.xyz -2 points 8 months ago

is this the post where the flaming starts then?

Amoeba_Girl@awful.systems 6 points 8 months ago

no i just wanted to know before calling you a hitler. maybe you can still grow and save yourself.

SpiderShoeCult@sopuli.xyz -4 points 8 months ago

waaait... are you an LLM? have I been arguing with ChatGPT this whole time? good one, whoever pulled this!

Amoeba_Girl@awful.systems 10 points 8 months ago

shit, find me the stupidest dog you know and i'll show you a being that is leagues beyond a fucking chatbot's capabilities. it can want things in the world, and it can act of its own volition to obtain those things. a chatbot is nothing. it's noise. fuck that. if you can't see it, it's because you don't know how to look at the world.