this post was submitted on 21 Jan 2024
334 points (98.8% liked)


A European delivery company had to disable its AI chatbot after it started swearing at a customer and admitting it was the “worst delivery firm in the world.”

[–] NigelFrobisher@aussie.zone 4 points 10 months ago (1 children)

AI turned into another clownshoes scam bubble in record time.

[–] fckreddit@lemmy.ml 1 points 10 months ago (1 children)

AI is actually interesting when applied correctly. The models behind it are essentially what I'd call statistical pattern recognition: they map specific inputs to specific outputs, and that mapping depends on the training data. Given an input, they generate an output, but they don't really understand the meaning of the input query or the output answer in the sense a human does, because they have no context or worldview, just an input-to-output mapping.
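
To make that concrete, here's a minimal sketch (not how any real model is built, just an illustration of the idea) of a mapping learned purely from training data: a bigram counter that, given a word, outputs whatever word most often followed it in the corpus.

```python
from collections import Counter, defaultdict

# Toy "statistical pattern recognition": learn which word most often follows
# each word in the training text, then generate output purely from those counts.
def train(corpus: str) -> dict:
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1          # the whole "model" is just co-occurrence counts
    return counts

def generate(counts: dict, start: str, length: int = 5) -> str:
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])  # pick the statistically likeliest next word
    return " ".join(out)

model = train("the parcel is late the parcel is lost the driver is late")
print(generate(model, "the"))  # output mirrors the training data, with no understanding involved
```

The output can look plausible, but it's nothing more than a replay of the statistics in the corpus.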

Another limitation is that these models don't have any sense of truth or falsity. Humans have many mechanisms for judging whether a statement is true, ranging from simply believing it without any critical thinking to actually researching it. Machine learning models have no such mechanisms. Loosely speaking, they accept any statement in the training data as truth, even contradictory ones, and just assign statistical weights to them.
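
As a toy illustration of that point (purely hypothetical data, not any real system): if the training data contains contradictory statements, the "answer" is simply whichever statement carries more statistical weight, with no notion of which one is true.

```python
from collections import Counter

# Hypothetical training data containing a contradiction.
training_data = [
    "the package was delivered",
    "the package was delivered",
    "the package was lost",        # contradicts the two statements above
]

weights = Counter(training_data)   # the "model" just counts how often each statement appears

def answer(query_prefix: str) -> str:
    # Return the statistically heaviest statement matching the prefix,
    # with no mechanism at all for checking which statement is true.
    matches = {s: w for s, w in weights.items() if s.startswith(query_prefix)}
    return max(matches, key=matches.get) if matches else "no data"

print(answer("the package"))  # "the package was delivered" -- chosen by weight, not by truth
```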

AI can be used to compress a lot of raw data into something that can be queried quickly. But using AI for chatbots that handle complex queries from humans, or for generating images and works of art, is bound to be disastrous. Too bad the money people don't understand that. They probably will soon enough.
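
Here's a rough sketch of that first point, using simple word-count vectors as a stand-in for learned embeddings (real systems use trained models and hypothetical data would be far larger, but the query-by-similarity idea is the same):

```python
import math
from collections import Counter

# Hypothetical documents; word-count vectors stand in for learned embeddings.
documents = [
    "how do I track my parcel",
    "refund policy for damaged items",
    "opening hours of the depot",
]

def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

index = [(doc, embed(doc)) for doc in documents]   # the compact, queryable representation

def query(text: str) -> str:
    vec = embed(text)
    return max(index, key=lambda item: cosine(vec, item[1]))[0]

print(query("where is my parcel"))  # -> "how do I track my parcel"
```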

[–] Default_Defect@midwest.social 4 points 10 months ago

So, very much like crypto, it has good, practical use cases that are largely ignored in favor of get-rich-quick schemes, and it will be dumped by tech bros the very minute a new scheme pops up.

The difference is that crypto was a solution looking for a problem, whereas "AI" actually has a use.