this post was submitted on 20 Dec 2024
66 points (93.4% liked)

Fuck AI

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

[–] justOnePersistentKbinPlease@fedia.io 4 points 4 days ago (1 children)

Show me documentation of any of this actually happening and being effective.

E.g., Dell has had automated logistics for more than 20 years; so have Ikea and a few others. LLMs would make that less efficient, since they are nowhere near as fast or reliable as conventional programs. And they hallucinate.

E.g. 2: LLMs cannot and will not "fine tune" robotic movements. The movement of a robotic arm is either hand-programmed or computed with a mathematical process called inverse kinematics, which moves the arm between two points. Those movements are already fine-tuned.
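For concreteness, here is a minimal sketch of what "inverse kinematics" means in the simplest case: an analytic solution for a two-link planar arm, using the law of cosines. The function name and the link lengths are illustrative, not from any particular robot controller.

```python
import math

def two_link_ik(x, y, l1, l2):
    """Analytic inverse kinematics for a two-link planar arm.

    Returns (shoulder, elbow) joint angles in radians that place the
    end effector at (x, y), or None if the point is unreachable.
    """
    d2 = x * x + y * y
    # Law of cosines gives the cosine of the elbow angle.
    c2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        return None  # target lies outside the arm's reach
    elbow = math.acos(c2)  # "elbow-down" solution
    shoulder = math.atan2(y, x) - math.atan2(
        l2 * math.sin(elbow), l1 + l2 * math.cos(elbow)
    )
    return shoulder, elbow
```

The point of the example: the answer is closed-form trigonometry, deterministic and exact; there is nothing for a statistical text model to "fine tune" here.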

You don't need vision systems in a warehouse. That's what QR and barcode scanners are for.

[–] 13esq@lemmy.world 1 points 1 day ago (1 children)

It doesn't necessarily contradict your point, but it adds nuance to the conversation. LLMs shine in areas like logistics, data analysis, and workflow automation, although their role in direct robotic control or real-time precision tasks is limited.

Where the confusion might arise is that while LLMs can contribute to robotics—like interpreting natural language commands or generating code—they aren’t a substitute for core movement algorithms like inverse kinematics. In other words, LLMs enhance certain aspects around robotics and automation but don't replace the specialized systems already in place for critical tasks.

The focus is more on integration and augmentation, not replacement.
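To sketch that division of labour: a language front end only has to turn a human request into a structured target, while motion planning stays with the conventional solver. Everything below is hypothetical; a plain regex stands in for the language model, and `parse_move_command` is an invented helper, not any real API.

```python
import re

def parse_move_command(text):
    """Toy stand-in for a natural-language front end.

    Extracts a target (x, y) coordinate from a command such as
    "move the arm to (1.5, -2)". The actual arm motion would still
    be computed by a conventional inverse-kinematics solver.
    """
    m = re.search(
        r"move (?:the arm )?to \(?(-?\d+(?:\.\d+)?),\s*(-?\d+(?:\.\d+)?)\)?",
        text,
    )
    if m is None:
        return None  # no recognisable command
    return float(m.group(1)), float(m.group(2))
```

The interface boundary is the point: whatever produces the target, whether a regex or a language model, hands off a pair of numbers, and the specialized control code does the rest.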

Proof. I am asking for anything from you to back up what you are claiming.

All of those things require context, which is something LLMs cannot ever understand; it is a hard limit of the statistical methods LLMs are built on.

Edit: also, here is evidence of cognitive decline caused by feeding LLM outputs back into LLM models: https://bmjgroup.com/almost-all-leading-ai-chatbots-show-signs-of-cognitive-decline/