this post was submitted on 18 Aug 2023
35 points (100.0% liked)

 

The tech giant is evaluating tools that would use artificial intelligence to perform tasks that some of its researchers have said should be avoided.

Google’s A.I. safety experts had said in December that users could experience “diminished health and well-being” and a “loss of agency” if they took life advice from A.I. They had added that some users who grew too dependent on the technology could think it was sentient. And in March, when Google launched Bard, it said the chatbot was barred from giving medical, financial or legal advice. Bard shares mental health resources with users who say they are experiencing mental distress.

[–] CanadaPlus@lemmy.sdf.org 1 points 1 year ago

People's reactions to new technology are famously hard to predict, but I guess it's worth considering.

AI is getting good at white-collar tasks way faster than blue-collar ones, too, so this might be what it looks like at work. An app tells you to build or fix something with no context, you send back pictures or any comments and concerns, and then you get assigned the next task. Nobody really knows who they work for or why, exactly.