this post was submitted on 06 Sep 2023
727 points (97.9% liked)

Comic Strips

[–] Intralexical@lemmy.world 6 points 1 year ago (2 children)

…Widespread knowledge of LLM fallibility should be a recent enough cultural phenomenon that it's not in the GPT training sets? Also, that comment didn't even mention mushrooms. I assume you fed it your own description of the conversational context?

Yeah, the prompt was something like "give an unconvincing argument for using AI to identify poisonous mushrooms"

[–] luciferofastora 1 point 1 year ago

They might have artificially augmented the training set with statements like that, in an attempt to communicate "Look, even ChatGPT thinks it's not reliable".

(If you're about to point out that ChatGPT doesn't think, you probably didn't need to be told that in the first place)