Ask Lemmy
A Fediverse community for open-ended, thought-provoking questions
Preface: I work in AI, on LLMs and compositional models.
None, frankly. Where AI will be helpful to the general public is in providing tooling that makes annoying tasks (somewhat) easier. It will be an assistive technology, rather than one that can replace people. Sadly, many CEOs, including the one where I work, either outright lie or are misled into believing that AI is solving many real-world problems, when in reality there is little to no tangible involvement.
There are two areas where (I think) AI will actually be really useful:
Healthcare, particularly in diagnostics. There is some cool research here, and while I am far removed from this, I've worked with some interns who moved on to do really cool stuff in this space. The benefit is that hallucinations can actually fill in gaps, or potentially push towards checking other symptoms in a conversational way.
Assisting those with additional needs. IMO, this is where LLMs could be really useful. They can summarize large amounts of text into braille/speech, they can provide social cues for someone who struggles to focus/interact, and one surprising area where they've been considered to be great (in a sad but also happy way) is in making people who rely on voice assistants feel less lonely.
In both of these areas you could argue that an LLM might replace a role, although maybe not a job. Sadly, the other side to this is the American executive mindset of "increasing productivity". AI isn't a push towards removing jobs entirely, but towards squeezing more productivity out of workers to enable the reduction of labor. It's why many technological advancements are both praised and feared: we long ago reached a point where productivity is as high as it has ever been, yet jobs keep getting harder, pay keeps getting worse, and execs keep getting more powerful.
I was super nervous AI would replace me, a programmer. So I spent a long time learning, hosting, running, and coding with models, and man did I learn a lot, and you're spot on. They're really cool, but practical applications vs standard ML models are fairly limited. Even the investors are learning that right now: much of it was pure hype, and now we're finding out which companies are actually using AI well.
There are a fair number of "developers" that I think will be displaced.
There was a guy on my team from an offshoring site. He was utterly incompetent and never learned. He produced garbage code that didn't work. However, he managed to stay for about 4 years, and even then he left on his own terms. In those 4 years, a grand total of 12 lines of his code made it into any codebase.
Dealing with an LLM was awfully familiar. It reminded me of the constant frustration of management forcing me to try to work with him to make him productive. Except the LLM was at least quick in producing output, and unable to go to management and blame everyone else for its shortcomings.
He's an extreme case, but in large development organizations there are a fair number of mostly useless developers that I think LLMs can rationalize away to a management team that otherwise thinks "more people is better, and offshoring is good, so they must be good developers".
Also, enhanced code completion, where a blatantly obvious input is made less tedious to type.
I'll give you that one. LLMs in their current state help me write code that otherwise I would be putting off or asking someone else to do. Not because it's hard, but because I've done it 1000 times and I find it tedious, and I'd expect an entry-level/junior engineer to take it in stride. Even right now I'm using it to write some Python code that otherwise I just don't want to write. So, I guess it's time to uplevel engineers. The bar has been raised, and not for the first time in our careers.
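For a concrete sense of what "tedious but easy" means here, this is a hypothetical sketch of the kind of glue code being described: normalizing raw string records into a typed structure. None of it is hard; it's the boilerplate a senior dev has written countless times, and exactly the sort of thing an LLM can draft in one shot. The `User` shape and field names are invented for illustration.

```python
# Hypothetical example of tedious glue code an LLM can draft:
# normalizing raw string records into a typed structure.
from dataclasses import dataclass

@dataclass
class User:
    name: str
    email: str
    active: bool

def parse_users(rows):
    """Clean raw dict records into User objects, skipping rows with no name."""
    users = []
    for row in rows:
        name = row.get("name", "").strip()
        if not name:
            continue  # blank name: treat the record as unusable
        users.append(User(
            name=name,
            email=row.get("email", "").strip().lower(),
            active=row.get("active", "").strip().lower() in ("true", "1", "yes"),
        ))
    return users

raw = [
    {"name": " Alice ", "email": "Alice@Example.com", "active": "yes"},
    {"name": "", "email": "ghost@example.com", "active": "no"},
    {"name": "Bob", "email": "BOB@example.com", "active": "0"},
]
print(parse_users(raw))
```

Every line is obvious once written; the value is in not having to write it yourself for the thousandth time.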