this post was submitted on 18 Jun 2024
94 points (100.0% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community


I followed these steps, but just so happened to check on my mason jar 3-4 days in and saw tiny carbonation bubbles rapidly rising throughout.

I thought that might just be part of the process, but double-checked with a Google search on day 7 (when there were no bubbles in the container at all).

Turns out I had just grown a botulism culture, and garlic in olive oil specifically is a fairly common way to grow this biotoxin.

Had I not checked on it 3-4 days in, I'd have been none the wiser and would have Darwinned my entire family.

Prompt with care and never trust AI, dear people...

[–] Dirk@lemmy.ml 55 points 6 months ago (28 children)

never trust AI

Statements from LLMs are to be seen as hallucinations unless proven otherwise by classic research.

[–] snooggums@midwest.social 39 points 6 months ago (27 children)

We don't need a fancy word that makes it sound like AI is actually intelligent when we're talking about how frequently AI is wrong and unreliable. AI being wrong is like someone repeating something as fact after misunderstanding it or taking a joke literally.

When people are wrong, we don't call it hallucinating unless their senses are altered. AI doesn't have senses.

[–] slopjockey@awful.systems 20 points 6 months ago

Does everyone else see this? These are the exact type of out-of-town haters we really want. I also think calling LLMs all but delusional is too generous, and I mean that unironically.
