this post was submitted on 19 Nov 2024
1066 points (97.7% liked)

People Twitter

5396 readers

People tweeting stuff. We allow tweets from anyone.

RULES:

  1. Mark NSFW content.
  2. No doxxing people.
  3. Must be a pic of the tweet or similar. No direct links to the tweet.
  4. No bullying or international politics.
  5. Be excellent to each other.
  6. Provide an archived link to the tweet (or similar) being shown if it's a major figure or a politician.

founded 2 years ago
archomrade@midwest.social 0 points 1 month ago

> The stuff I’ve seen AI produce has sometimes been more wrong than anything a human could produce. And even if a human would produce it and post it on a forum, anyone with half a brain could respond with a correction.

Seems like the problem is that you're trying to use it for something it isn't good or consistent at. It's not a dictionary or an encyclopedia; it's a language model that happens to have some information embedded. It isn't built to retrieve information from a knowledge bank, it's just there to deconstruct and reconstruct language.

> When someone on a forum says they asked GPT and paste its response, I will at the very least point out the general unreliability of LLMs, if not criticise the response itself (very easy if I’m somewhat knowledgeable about the field in question)

Same deal. Absolutely chastise them for using it that way, because that's not what it's good for. But assuming most people use it that way is a bit of frequency bias: the people pasting model output are exactly the ones you see on social media. Those who use it for routine tasks aren't taking responses straight from the model and posting them on lemmy; they're using it for mundane things that never get shared.

> Anyway my point was merely that people do regularly misuse LLMs, and it’s not at all difficult to make them produce crap. The stuff about who should be blamed for the whole situation is probably not something we disagree about too much.

People misuse it because they think they can ask it questions as if it were a person with real knowledge, or because they want precisely its convincing-bullshit abilities. That's why I said it's like laughing at a child for giving a wrong answer, or convincing them of a falsehood through passive suggestion: the problem isn't that the kid is dumb, it's that you (both you and the person using it) went in expecting they could answer that question, or distinguish fact from fiction, at all.