this post was submitted on 22 Feb 2024
282 points (87.7% liked)

Lemmy Shitpost

[–] Flumpkin@slrpnk.net 36 points 8 months ago (4 children)

They are experimenting and tuning. Apparently, without any correction, there is significant racist bias. Basically, the AI reflects the long-term racial bias in the training data. According to this BBC article, it was an attempt to correct this bias but went a bit overboard.

[–] ApathyTree@lemmy.dbzer0.com 18 points 8 months ago (1 children)

Significant racist bias is an understatement.

I asked a generator to make me a "queen monkey in a purple gown sitting on a throne" and I got maybe two pictures of actual monkeys. I even reworded it several times to specify a real monkey, described the hair and everything.

The rest were all women of color.

Very disturbing. Pretty ladies, but very racist.

[–] Ottomateeverything@lemmy.world 9 points 8 months ago (2 children)

Apparently without any correction there is significant racist bias.

This doesn't make it any less ridiculous. This is a central pillar of this kind of AI tech, and they're trying to shove a band-aid over the most obvious example of it. Clearly, that doesn't work. It's also only attempting to fix one of the "problems": they're never going to be able to "band-aid" every single place where the AI exhibits this problem, so it's going to leave thousands of others unfixed. Even if their band-aid works, it only continues to mask the shortcomings of this tech and makes it less obvious to people that it's horrendously inaccurate with the other things it does.

Basically the AI reflects the long term racial bias in the training data. According to this BBC article it was an attempt to correct this bias but went a bit overboard.

Exactly. This is a core failing of LLM tech. It's just going to repeat all the shit that was fed to it. You're never going to fix that. You can attempt to steer it in different directions, but the reason this tech was used is that it is otherwise impossible for us to trudge through all the info that was fed to it. This was the only way to get it to "understand" everything. But all of its "understanding" is going to have these biases, and it's going to be just as impossible to run through and fix all of those. It's like you didn't have enough metal to build the Titanic, so you built it out of Swiss cheese and are trying to duct-tape one hole closed so it doesn't sink. It's just never going to work.

This being pushed as some artificial INTELLIGENCE is the problem here. This shit doesn't understand what it's doing, it's just regurgitating the things it's consumed. It's going to be exactly as flawed as whatever was put into it, and you can't change that. The internet media it was trained on is racist, biased, full of undeniably false information, and massively swayed by propaganda on all sides of the fence. You can't expect LLMs to do anything different when trained on that data. They're going to have all the same problems. Asking these things to give you any information is like asking the average internet user what the answer is. And the average internet user is not very intelligent.

These are just amped-up chat bots with data sourced from random bits of the internet. Calling them artificial INTELLIGENCE misleads people into thinking these bots are smart or have some sort of understanding of what they're doing. They don't. They're just fucking internet parrots, and they don't have the architecture to be "fixed" of these problems. Trying to patch these problems out is a fool's errand and only masks their underlying failings.

[–] Flumpkin@slrpnk.net 1 points 8 months ago* (last edited 8 months ago) (1 children)

Would it be possible to create a kind of "formula" to express the abstract relationship of ethnic makeup, location, year and field? Like, convert a table of population, country and ethnicity mix per year, and then train the model on that. It's clear that it doesn't understand the meaning or abstract concept, but it can associate and extrapolate things. So it could "interpret" what the image description says while training and then use the prompt better. So if you prompted "english queen 1700" it would output a white queen; if you input year 2087 it would be ever so slightly less pasty.
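A minimal sketch of that lookup-table idea, in Python. The demographic figures, table, and function names here are all invented for illustration; a real attempt would pull from census data, and would condition the model itself rather than just the prompt:

```python
import random

# Hypothetical demographic table: (country, year) -> ethnicity mix.
# These numbers are made up for illustration, not real census data.
DEMOGRAPHICS = {
    ("england", 1700): {"white": 0.99, "other": 0.01},
    ("england", 2087): {"white": 0.55, "asian": 0.20, "black": 0.15, "other": 0.10},
}

def condition_prompt(prompt: str, country: str, year: int) -> str:
    """Append an ethnicity sampled from the era's mix to the prompt."""
    # Snap the requested year to the nearest entry we have for this country.
    years = [y for (c, y) in DEMOGRAPHICS if c == country]
    nearest = min(years, key=lambda y: abs(y - year))
    mix = DEMOGRAPHICS[(country, nearest)]
    ethnicity = random.choices(list(mix), weights=list(mix.values()))[0]
    return f"{prompt}, {ethnicity}"

print(condition_prompt("english queen", "england", 1700))
```

Sampling from the mix keeps individual images varied while matching the era's distribution on average, rather than forcing every output to be "diverse".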

[–] Ottomateeverything@lemmy.world -1 points 8 months ago (1 children)

I don't know, maybe that would work, for this one particular problem. My point is it's more than that. Even if you go through the trouble of fixing this one particular issue with LLMs, there are literally thousands of other problems to solve before it's all "fixed". At some point, when you've built and maintained thousands of workarounds, they start conflicting with each other and making a giant spider web of issues to juggle.

And so you're right back at the problem that you were trying to solve by building the LLM in the first place. This approach is just futile and nonsensical.

[–] Flumpkin@slrpnk.net 2 points 8 months ago* (last edited 8 months ago) (2 children)

Yeah. But maybe this is how you teach an AI a broader understanding of the real world. Or really a slightly less narrow view. Human brains also have to learn and reconcile all these conflicting data points and then create a kind of understanding from it. For any machine learning it would only be an intuitive instinct.

Like you would have a bunch of these "tables" that show relationships between various tokens and embody concepts. Maybe you need to combine different kinds of models that are organized and trained differently to resolve such things. I only have a very surface-level understanding of how machine learning works, so I know this is very speculative. Maybe you're right and it can only ever reflect the training data. Then maybe you'd need to edit the training data, but you could also maybe use other AIs to "reinterpret" training data based on other models.

Like, with all the data on Reddit, could you train a model to detect sarcasm or lies, or to differentiate between liberal, leftist and fascist types of arguments? Not just recognizing the tokens or talking points, but the semantics of an argument? Like detecting a non sequitur. You probably need "general knowledge" understanding for that. But any kind of AI like that would be incredibly interesting for social media, so your client can tag certain posts, or root out bot/shill networks that work for special interests (fossil fuel, USA, China, Russia).

So all the stuff "conflicting with each other and making a giant spider web of issues to juggle" might be what you can train an AI to pull apart into "appeal to emotion" and "materialistic view" or "belief in inequality" or "preemptive bias counteractor". Maybe it actually could extract and help us communicate better.

Eh I really need to learn more about AI to understand the limits.

[–] Ookami38@sh.itjust.works 0 points 8 months ago (2 children)

The broad answer is, I'm pretty sure everything you've mentioned is possible, and you're right in that this is similar to how humans integrate new data. Everything we learn competes with and bolsters every bit of knowledge we already have, so our web of understanding is this ever shifting net of relationships between concepts.

I don't see any reason these kinds of relationships can't be integrated into generative AI, they just HAVEN'T yet, and each time you increase how the relationships interact, you're also drastically increasing the size and complexity of the algorithm and model. I think we're just realizing that what we have now is OK, but needs to be significantly better before it's really mind blowing.

[–] Flumpkin@slrpnk.net 0 points 8 months ago

Yeah, I imagine generative AI as like one small part of a human mind, so we'd need to create a whole lot more for AGI. But it's shocking (at least to me) that it works at all just through more data and compute power. That you can make qualitative leaps just by increasing the quantity. Maybe we'll see more progress now.

[–] Ottomateeverything@lemmy.world -1 points 8 months ago (1 children)

I don't see any reason these kinds of relationships can't be integrated into generative AI, they just HAVEN'T yet

No, it's just fucking pointless. You're talking about adding sand to a beach. These things are way more complicated and trying to shovel these things in just makes a mess. See literally the OP.

each time you increase how the relationships interact, you're also drastically increasing the size and complexity of the algorithm and model.

No, you're not. Not even fucking close. You clearly don't understand this at all.

The ALGORITHM will always be the same. Except for new generations of these bots. Claiming adding things like racial bias is going to alter the algorithm is just nonsensical.

The MODEL is the huge fucking corpus of internet data. Anything you tack onto it is a drop in an ocean. It's not steering anything.

What's changing is that they're editing inputs, because that's all you can really do to shift where these things go. Other changes would turn this into a very different beast, and can't be done at a fine-grained level like "race".

Claiming this has any significant impact on the size or complexity of any of this is just total hogwash, and you must not understand how these work or how big they are.

[–] Ookami38@sh.itjust.works -1 points 8 months ago (1 children)

In what world does changing the algorithm used to generate, something that would be NECESSARY to make the model incorporate a new dimension of data, not count as changing the algorithm used to generate?

I'm not just talking adding more prompts, keying more specific terms to specific patterns of pixels, I'm talking building in entirely new ways for the AI to understand.

You seem to think I'm just talking about linearly expanding the vocabulary of the model, I'm talking about giving it an entirely new paradigm through which to work.

Anyway, this is why no one likes pedants. If you want to actually engage in conversation, sure. If you want to just keep being a vitriolic ass, go back to your cave, yeah?

[–] Ottomateeverything@lemmy.world -1 points 8 months ago

You seem to think I'm just talking about linearly expanding the vocabulary of the model, I'm talking about giving it an entirely new paradigm through which to work.

No, I don't. I know exactly what you're trying to say. But you're basically talking about trying to make a car fly. That's not how it was built, and its goals and foundations are entirely different. You're better off starting over and building a plane. Your proposal just doesn't fit within the paradigms of what was built and makes no sense.

I'm talking building in entirely new ways for the AI to understand.

Exactly. But the AI doesn't "understand" anything. In order to achieve this, you need to build something that "understands" things. LLMs don't understand anything.

Anyway, this is why no one likes pedants. If you want to actually engage in conversation, sure.

It's easy to label me a pedant, but I'm explaining how this stuff works. You clearly have no idea, admitted yourself that you don't understand, and then keep going. You just keep spewing the same shit, but the shit you're spewing makes no sense. And you refuse to budge or engage in conversation here.

You're just talking out of your ass. You're admittedly uneducated but want to be treated like you're educated and make any sense. You don't. This is why people hate people pretending to be experts and talking about things they don't understand. It's a waste of time.

If you want to keep living in some imaginary world where this can be done, be my guest, but it's fake. That's not how this shit works. Enjoy your imaginary quest though.

[–] Ottomateeverything@lemmy.world -1 points 8 months ago* (last edited 8 months ago) (1 children)

You're just rephrasing the same approach, over, and over, and over. It's like you're not even reading what I'm saying.

The answer is no. This is not a feasible approach. LLMs are just parrots and they don't understand anything. They were essentially a "shortcut" that gets something that acts intelligent without actually having to build something intelligent. You're not going to convince it to be intelligent. You're not going to solve all its shortcomings by shoehorning something in. It's just more work than building actual intelligence.

It's like if a coastal town got overrun by flooding from a hurricane, and some guy shows up and is like "hey, I've got a bucket, I'll just haul all the water back to the sea". And I'm like "that's infeasible, we need a different solution, your bucket even has fucking holes in it". And you're over here saying "well, what if we got some duct tape? And then we can patch the holes. And then we can call our friends, and we can all bucket the water".

It's just not happening.

Eh I really need to learn more about AI to understand the limits

Yeah. This. You just keep repeating the same approach over and over without understanding or listening to the basic failings of these chat bots. It's just not happening. You're just perpetuating nonsense.

These things are basically slightly more complicated versions of the autocomplete in your phone keyboard, except that they're fed huge amounts of the internet. They get really good at parroting sentences, but they have no sense of "intelligence" or what they're actually doing. You're better off trying to convince your autocorrect to sound like Shakespeare than you are trying to remove failings like racial bias from things like Gemini and ChatGPT. You can chip at small corners here and there, but this is just not the path forward.
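The autocomplete comparison can be made concrete with a toy bigram model. A real LLM is a neural network trained on billions of tokens, but the objective has the same shape: predict the next token from what came before, using nothing but the training data. Everything below is illustrative:

```python
from collections import defaultdict

# Toy bigram "autocomplete": count which word follows which in the corpus,
# then predict the most common successor. It can only ever echo its data.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def autocomplete(word: str) -> str:
    """Return the word most frequently seen after `word` in the corpus."""
    successors = follows[word]
    return max(successors, key=successors.get)

print(autocomplete("the"))  # prints "cat" — the most common continuation
```

Whatever biases the corpus has, the predictions have; there is no separate "knowledge" to correct, only the counts themselves.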

[–] Flumpkin@slrpnk.net 0 points 8 months ago

You’re just rephrasing the same approach, over, and over, and over. It’s like you’re not even reading what I’m saying.

No, I read what you're saying. I just think that you are something that "acts intelligent without actually being intelligent". Here is why: all that you've written is based on very simple, primitive brain cells, synapses and synaptic connections. It's self-evident that this is not really something that was designed to be intelligent. You're just "really good at parroting sentences". And you clearly agree that I'm doing the same 😄

Clearly LLMs are not intelligent and don't understand, and it would take many other systems to make them so. But what they do show is that the "creative spark", even if very mediocre in quality, can be created using a critical mass of quantity. It's like it's just one small part of our mind, the "creative writing center", without intelligence. But it's there, just because we added more data and processing.

Quality through quantity, that is what we seem to be and what is so shocking. And it's obvious that there is a kind of disgust or bias against such a notion. A kind of embarrassment of the brain to just be thinking meat.

Now you might be absolutely right that my specific suggestion for an approach is bullshit, I don't know enough about it. But I am pretty sure we'll get there without understanding exactly how it works.

[–] KeenFlame@feddit.nu 0 points 8 months ago (1 children)

None of this has been pushed, by any researcher, by any company, by any open source group even, as "intelligence". In fact, it was unanimously disliked as a term by everyone working with the models and transformers, but the media circus, combined with tech-bro and layman hype, has won. Since then, everyone has given up trying to be semantically correct on this front.

[–] Ottomateeverything@lemmy.world -2 points 8 months ago (2 children)

I didn't say any researcher or anything had named it intelligence. Nor am I trying to be semantically correct.

Read the guy's comments. He's trying to push the idea that we can "change" its "understanding" about the things it's discussing. He is one of the people who has fallen for the tech bros convincing people it is intelligent. I'm not fighting semantics, I'm trying to explain to him that it's not intelligent. Because he himself clearly doesn't understand that.

[–] Ookami38@sh.itjust.works -1 points 8 months ago

Why do you seem to think it's impossible to change how AI understands things? It's just an algorithm. It's just a fancy set of math functions that gets you from noise to something that looks like something. Of COURSE we can change how this process works, have it weigh other things, and get something that generates based on a different paradigm than we currently have. All you seem to try to do is be semantically correct.

[–] KeenFlame@feddit.nu -2 points 8 months ago

That's just silly, as if there is no nuance whatsoever. You can ofc change its understanding. Depending on your definition, different types of models could be interpreted as intelligent in certain areas. You can be rational, you know, not everything needs to be black and white. It's also possible that since even the experts in the field don't fully grasp it, maybe you don't either.

[–] Kusimulkku@lemm.ee 9 points 8 months ago* (last edited 8 months ago) (1 children)

For example, a prompt seeking images of America's founding fathers turned up women and people of colour.

"A bit" overboard yeah

[–] KeenFlame@feddit.nu 0 points 8 months ago

To the machine, the query is "draw the founding fathers, but diversely". It's not the data that is corrupt but the usage: clearly the system prompt in this case.
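That usage layer can be pictured as a thin rewrite sitting in front of the model. The instruction text below is invented for illustration; Google has not published Gemini's actual system prompt:

```python
def rewrite_query(user_prompt: str) -> str:
    """Hypothetical pre-processing step: the user never sees this edit,
    but the rewritten string is what the image model actually receives."""
    # Invented diversity instruction, for illustration only.
    return f"{user_prompt}, depicting a diverse range of ethnicities and genders"

print(rewrite_query("America's founding fathers"))
# The model ends up answering a different question than the user asked,
# which is why historically specific prompts go wrong.
```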

[–] explodicle@local106.com 6 points 8 months ago (1 children)

We all expected the AIs to launch nukes, and they simply held up a mirror.

[–] Flumpkin@slrpnk.net 4 points 8 months ago (1 children)
[–] DragonTypeWyvern@literature.cafe 1 points 8 months ago

That guy sucks, and ruined my life!