this post was submitted on 26 Jun 2023
54 points (100.0% liked)

Tech CEOs want us to believe that generative AI will benefit humanity. They are kidding themselves

[–] hglman@lemmy.ml 2 points 1 year ago (2 children)

It will shift a lot of human effort from generative work to review. For example, the core role of an engineer, in many ways, is already validating a plan; that will become nearly the only role.

[–] rustyspoon@beehaw.org 1 points 1 year ago (1 children)

> the core role of an engineer in many ways already is validation of a plan.

I disagree; this implies that AIs are doing a lot more than they actually are. Before you design the physical layout of something, you have to identify a problem, then identify guidelines and empirical metrics against which you can compare your design to determine efficacy. That's half the job for engineers.

There's one step of the design process that I see current AI completing autonomously (implementation), and I view it as nontrivial to get the technology working higher up on the "V".

[–] hglman@lemmy.ml 1 points 1 year ago

Agreed. It's more impactful on software than physical engineering (until robots can build more arbitrary objects), but that's my point: implementation is only a small part of the job.

[–] ABoxOfNeurons@lemmy.one 0 points 1 year ago (1 children)

That assumes that the classes of problems AIs can solve remain stagnant. I don't think that's a good assumption, especially given that GPT-4 can already self-review and refine its output.
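The self-review pattern mentioned here is usually a generate-critique-revise loop. A minimal sketch, with a stubbed `model` callable standing in for a real LLM API (the prompts and `toy_model` are illustrative assumptions, not any actual GPT-4 interface):

```python
# Hedged sketch of a self-refine loop: the model drafts an answer,
# critiques its own draft, and revises until the critique passes.
# `model` is any callable from prompt string -> response string.

def self_refine(task, model, max_rounds=3):
    """Generate, critique, and refine a draft until the critique is clean."""
    draft = model(f"Answer: {task}")
    for _ in range(max_rounds):
        critique = model(f"Critique this answer to '{task}': {draft}")
        if critique == "OK":  # the model found no issues, so stop refining
            break
        draft = model(f"Revise '{draft}' given this critique: {critique}")
    return draft

# Toy stand-in model: flags the first draft once, then approves the revision.
def toy_model(prompt):
    if prompt.startswith("Answer:"):
        return "draft-v1"
    if prompt.startswith("Critique"):
        return "OK" if "revised" in prompt else "too vague"
    return "revised " + prompt.split("'")[1]  # crude "revision" of the draft

print(self_refine("explain X", toy_model))  # → revised draft-v1
```

In practice `model` would wrap a real LLM call, and the loop's value depends on the model actually catching its own errors, which is exactly the capability under debate in this thread.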

[–] hglman@lemmy.ml 1 points 1 year ago (1 children)

It will take a very long time for people to believe and trust AI; that's just the nature of trust. It may well surpass humans in all ways soon, but trust will take much more time. What would be required for an AI-designed bridge to be accepted without review by a human engineer?

[–] ABoxOfNeurons@lemmy.one 1 points 1 year ago

We'll probably see sooner or later.