this post was submitted on 27 Sep 2024
1344 points (99.4% liked)
Technology
A tool is a tool. It has no say in how it's used. AI is no different from the computer software you use to browse the internet or do other digital tasks.
When it's used badly, as an outlet for escapism or a substitute for social connection, it can lead to bad consequences in your personal life.
It's at its best as a tool to help reason through a tough task, or as a step in a creative process. As on-demand assistance to aid the disabled. As a non-judgemental conversational partner the neurodivergent and emotionally traumatized can open up to. Or to help a super genius rubber-duck their novel ideas and work through complex thought processes. It can improve people's lives if applied to the right use cases.
It's about how you choose to interact with it in your personal life, and how society, businesses, and your governing bodies choose to use it in their own processes. And believe me, they will find ways to use it.
I think comparing LLMs to computers in the '90s is accurate. Right now only nerds, professionals, and industry/business/military see their potential. As the tech gets figured out, utility improves, and LLM desktops start getting sold as consumer-grade appliances, maybe the attitude will change?
That is a myopic view. Sure, a tool is a tool: if I take a gun and use it to save someone from getting mugged = good; if I use it to mug someone = bad.
But regardless of the circumstances of use, we can all agree that a gun's only utility is to destroy a living organism.
You know, I know, everyone here knows: AI will only be used to generate as much profit as possible in the shortest amount of time, regardless of the harm it causes. And right now, the big promise of AI is that it will replace costly human employees. That's it, that's all.
Fortunately, it is really bad at this and unlikely to achieve that goal.
A better analogy is search engines. It’s just another tool, but
When I started as a software engineer, my detailed knowledge was most important and my best tool was the manuals. Now my most important tools are search engines and autocomplete: I can work faster with less knowledge of the syntax and my value is the higher level thought about what we need to do. If my company ever allows AI, I fully expect it to be as important a tool as a search engine.
And this is where the cost calculation comes into play. Using a search engine is basically free, whereas using OpenAI for development means licenses and new hardware.
So the question will be: does the added efficiency justify the cost of the licenses and new hardware?
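That break-even question can be sketched with some quick arithmetic. All the figures below are made-up assumptions for illustration, not real pricing or salary data:

```python
# Illustrative break-even sketch for an AI tooling license.
# Every number here is an assumption for the example, not real pricing.

def breakeven_gain(license_cost_per_dev_year: float,
                   loaded_dev_cost_per_year: float) -> float:
    """Return the fraction of a developer's yearly output the tool must
    add before the license pays for itself."""
    return license_cost_per_dev_year / loaded_dev_cost_per_year

# Assumed: a $300/year license against a $150,000/year fully loaded developer.
gain = breakeven_gain(300, 150_000)
print(f"Tool must improve output by {gain:.1%} to break even")
```

Under these assumed numbers the required gain is tiny, which is why the real debate tends to be about the other costs (hardware, integration, liability) rather than the license fee itself.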
Currently my company is more concerned with intellectual property, privacy, security, and liability. Of course that means they'll only allow AI where they can pay for guarantees, and that brings us back to cost.