this post was submitted on 06 Nov 2023
121 points (100.0% liked)

World News

[–] Heresy_generator@kbin.social 52 points 1 year ago* (last edited 1 year ago) (7 children)

ANNs like this will always just reflect our own biases and stereotypes back at us unless the data is scrubbed and curated in a way that no one is going to spend the resources on. Things like this are a good demonstration of why they need to be kept far, far away from decision-making processes.

[–] megopie@beehaw.org 13 points 1 year ago

And even if moderated, it will display new, unique biases, as otherwise unassuming things get moderated out of the pool by people who take exception to them.

Not to mention the absurd and inhuman mental toll this work takes on the exploited workers forced to sort it.

Like, this is all such a waste of time, effort, and human sanity, for tools of marginal use that are mostly just a gimmick to prop up the numbers for tech bros who have borrowed more money than they can pay back.

[–] tesseract@beehaw.org 13 points 1 year ago

Of course they will be used for decision-making processes. And when you complain, they will dismiss you, saying the 'computer' said so. The notion that the computer is infallible existed long before LLMs became mainstream.

[–] Lowbird@beehaw.org 11 points 1 year ago

Also, it's the type of thing that makes me very worried about the fact that most of the algorithms used in things like police facial recognition software, recidivism calculation software, and suchlike are proprietary black boxes.

There are - guaranteed - biases in those tools, whether in their algorithms or in the unknown datasets they're trained on, and neither police nor journalists can actually see the inner workings of the software to know what those biases are, to counterbalance them, or to recognize whether the software is so biased as to be useless.

[–] Greg@lemmy.ca 8 points 1 year ago

This isn't a large language model, it's an image generation model. And given that these models just reflect humans' biases and stereotypes back at us, doesn't it follow that humans should also be kept far away from decision-making processes?

The problem isn't the tool, it's the lack of auditable accountability. We should have auditable accountability in all of our important decision making systems, no matter if it's a biased machine or biased human making the decision.

This was a shitty implementation of a tool.

[–] HappyMeatbag@beehaw.org 3 points 1 year ago (1 children)

Something as simple and obvious as this makes me wonder what other hidden biases are just waiting to be discovered.

[–] hh93@lemm.ee 9 points 1 year ago

I think the best example of how AI only furthers a bias that's already there is when Amazon used AI to weed out job applications. They trained a model on which applications resulted in hires and which failed. Eventually they noticed they were almost only interviewing men, and on closer inspection they found that they had already been subconsciously discriminating against women earlier in the process - but at least HR had been sending an equal number of men and women to interviews. That was no longer the case, since the AI saw no value in sending women to interviews if most of them wouldn't be hired anyway.
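The feedback loop described above can be sketched in a few lines. This is a minimal illustration with entirely made-up data (the groups, skill scores, and past decisions are hypothetical, not Amazon's actual data): if the historical hiring labels correlate with group membership rather than skill, even the crudest "model" - frequency counting - learns to screen out one group.

```python
from collections import defaultdict

# Hypothetical history of (group, skill score, hired?) decisions.
# Candidates are comparably skilled, but past decisions favored group "A",
# so the hiring label correlates with group, not skill.
history = [
    ("A", 8, 1), ("A", 6, 1), ("A", 5, 1), ("A", 7, 1),
    ("B", 8, 0), ("B", 6, 0), ("B", 9, 1), ("B", 7, 0),
]

# "Train" by simple frequency counting: estimate P(hired | group).
counts = defaultdict(lambda: [0, 0])  # group -> [hired count, total count]
for group, _skill, hired in history:
    counts[group][0] += hired
    counts[group][1] += 1

def p_interview(group):
    """Predicted odds the screener forwards a candidate to interview."""
    hired, total = counts[group]
    return hired / total

# Equally skilled candidates now get very different predicted odds,
# and group B stops reaching the interview stage almost entirely.
print(p_interview("A"))  # 1.0
print(p_interview("B"))  # 0.25
```

A real screening model is vastly more complex, but the failure mode is the same: the bias lives in the labels, so no amount of training sophistication removes it.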

[–] jarfil@beehaw.org 2 points 1 year ago* (last edited 1 year ago)

Things like this are a good demonstration of why they need to be kept far, far away from decision making processes.

Somewhat ironic to say, on a platform that's already using ANNs as a first line of defense against users spamming CSAM.

I have no delusions regarding decision makers using them, my only doubt is for how long they've been using them to decide the next step in wars around the world.

[–] EthicalAI@beehaw.org 2 points 1 year ago

I mean, maybe from this starting point we can make an AI that uses reason to uncover these biases in the future. We are only at the beginning.