this post was submitted on 17 Nov 2024
138 points (96.6% liked)
Technology
They seem to have a foregone conclusion that AI is a positive thing, rather than something that should be eradicated like smallpox or syphilis.
"Responsible use of AI" could mean things like providing small offline models for client-side translation. They're actually building that feature and the preview is already amazing.
Not just building it; it's shipping by default. That is, the language detection and the code that displays a popup asking whether you want to download the actual translation model ship by default. About 12 MB per model, so 24 MB for a language pair.
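As a rough sketch of that on-demand flow (hypothetical code, not Firefox's actual implementation; the class, method names, and the per-model size are assumptions taken from the comment above):

```python
# Hypothetical sketch of download-on-demand translation models.
# MODEL_SIZE_MB and every name here are illustrative, not Firefox's real API.
MODEL_SIZE_MB = 12

class TranslationModels:
    def __init__(self):
        self.cached = set()  # (src, dst) directions already downloaded

    def needed(self, src, dst):
        # translation needs one model per direction,
        # so a fresh language pair costs roughly 24 MB
        return [(src, dst), (dst, src)]

    def download_size_mb(self, src, dst):
        missing = [d for d in self.needed(src, dst) if d not in self.cached]
        return len(missing) * MODEL_SIZE_MB

    def confirm_and_download(self, src, dst, user_accepts):
        # nothing is fetched until the user agrees in the popup
        if not user_accepts:
            return False
        self.cached.update(self.needed(src, dst))
        return True
```

The point of the popup step is that the download only happens with consent, and a cached pair costs nothing the next time.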
IMO, there's no such thing as responsible AI use. All of the uses so far are bad, and I can't see any that would work as well as a trained human. Even worse, there's zero accountability; when an AI makes a mistake and gets people killed, no executives or programmers will ever face any criminal charges because the blame will be too diffuse.
There is no gray. Only black and white!
So who should be held accountable when (mis)use of AI results in a needless death? Or worse?
Let's say a company creates an AI taxi that runs you over, leaving you without legs. Who are you going to sue?
"Oh, it's gray, so I'll have a dollar from each shareholder." That doesn't sound right to me.
Who's getting killed because of the "translate page" button in my browser?
The "translate page" button in my browser is evil? Get a grip.
There are valid uses for AI. It is much better at pattern recognition than people. Apply that to healthcare and it could be a paradigm shift in early diagnosis of conditions that doctors wouldn't think to look for until more noticeable symptoms occur.
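To illustrate the pattern-recognition point in the simplest possible terms, here is a toy nearest-neighbour classifier that flags unusual measurements; real diagnostic models are vastly more sophisticated, and every name and data point here is made up for illustration:

```python
import math

def knn_flag(train, query, k=3):
    """Toy k-nearest-neighbour classifier: label a new measurement
    by majority vote among the k closest labelled examples."""
    nearest = sorted(train, key=lambda item: math.dist(item[0], query))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

# Made-up "measurements": two labelled clusters of past cases
history = [
    ((0.0, 0.0), "normal"),
    ((0.1, 0.2), "normal"),
    ((0.2, 0.1), "normal"),
    ((1.0, 1.0), "review"),
    ((0.9, 1.1), "review"),
]
```

The value in the healthcare setting is exactly this: a model can compare a new case against far more prior cases than any one doctor remembers, and surface the "review" flag early.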
You're going to upset a lot of chess players if you get rid of all AI.
It's because it is a positive thing. Just because awful businesses hijacked and abused it doesn't mean it's all bad. Mozilla is approaching it in a positive way imo.
And what, exactly, is positive about it, that has no associated negative outcomes?
Specific to generative AI, I think client side generation can be a good thing, such as sentiment analysis or better word suggestions/autocomplete.
A number of other helpful tasks have negative outcomes, but if someone is going to use the tech anyway, I'd rather they use the version that minimizes those negative outcomes. Whether Mozilla should be focusing on building that is a different matter, though.
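A client-side word suggester can be as small as a frequency table. This toy bigram model sketches the autocomplete idea (nothing here reflects Mozilla's or anyone's actual code; it just shows that "better word suggestions" can run entirely locally):

```python
from collections import Counter, defaultdict

def build_bigram_model(text):
    """Count which word tends to follow which; runs locally, no network calls."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def suggest(model, word, n=3):
    """Return up to n of the most likely next words after `word`."""
    return [w for w, _ in model[word.lower()].most_common(n)]
```

Trained on the user's own text and kept on-device, something like this gives the "generative" convenience without sending keystrokes to a server.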
AI that isn't generative AI has a lot of positive uses, but usually that's not what these discussions are about