Same. 5 minutes after installing Copilot I literally said out loud, "Well... I'm never turning this off."
It's one of the nicest software releases in years, and it's instantly useful, too. No real adjustment period at all.
I tried it for a couple of months and it was alright, but eventually it got too frustrating. I did love how well it handled really repetitive things, but it rarely got anything complex 100% right. In computing, "almost right" is wrong. And because it was so close to right, the mistakes were hard to spot.
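To give a concrete flavor of what I mean, here's a made-up example (mine, not actual Copilot output) of the kind of near-miss I kept hitting:

```python
# Hypothetical "almost right" completion: it runs, it looks fine at a
# glance, and it is wrong.

def paginate(items, page, page_size):
    """Return the given 1-indexed page of items."""
    start = page * page_size  # bug: should be (page - 1) * page_size
    return items[start:start + page_size]

# paginate(list(range(10)), 1, 3) returns [3, 4, 5] instead of [0, 1, 2],
# exactly the kind of near-miss that's easy to wave through in review.
```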
There were cases where my IDE knew the right answer but Copilot did not. Once I realized Copilot was drowning out my IDE's smarter suggestions and producing code I had to painfully babysit, I cancelled it.
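A made-up illustration of that gap (the invented method name is the point, not a real suggestion I'm quoting):

```python
# A type-aware IDE knows this method doesn't exist; a purely
# statistical completion will happily invent it anyway.

from datetime import datetime

ts = datetime.now()
# ts.to_unix()           # plausible-sounding, but datetime has no to_unix()
print(ts.timestamp())    # the IDE's own completion knows the real method
```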
This is the most insidious conundrum of AI usage. At the end of the day, an LLM's top priority is to produce an answer that looks satisfying; the accuracy of that answer is a secondary concern. Forced to choose between making up BS that looks right and admitting it doesn't have enough information to answer, it can and often will choose the former. Thus the "hallucination" problem was born.
The chance of getting your answer lightly sprinkled with made-up stuff is disturbingly high. That shifts the user's cognitive load from "what is the answer?" to "now I have to verify everything in this answer, because I can't trust it."
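One way to cope with that shifted load (a sketch with hypothetical names, not a prescription): treat generated code as untrusted input and pin it down with tests, so at least the verification becomes mechanical.

```python
# Sketch: the verification work doesn't go away, it just moves from
# "writing the code" to "checking the code".

import unittest

def paginate(items, page, page_size):
    """1-indexed pagination, the fixed version of the near-miss upthread."""
    start = (page - 1) * page_size
    return items[start:start + page_size]

class TestPaginate(unittest.TestCase):
    def test_first_page_starts_at_zero(self):
        # The test an off-by-one "almost right" completion would fail.
        self.assertEqual(paginate(list(range(10)), 1, 3), [0, 1, 2])

if __name__ == "__main__":
    unittest.main()
```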
Not an insurmountable obstacle, and it will likely be solved sooner rather than later, but right now AI is arguably the perfect extension of the modern internet: take absolutely everything you read with at least a grain of salt... and keep a pile of salt cubes close by.