this post was submitted on 11 Dec 2023
524 points (87.2% liked)

[–] dojan@lemmy.world 254 points 11 months ago (6 children)

Using someone’s preferred pronouns isn’t woke, it’s basic human decency.

[–] wagesj45@kbin.social 73 points 11 months ago (1 children)

Tomato, tomato, as far as they're concerned.

[–] eating3645@lemmy.world 51 points 11 months ago (6 children)

Tomato, tomato translates hilariously poorly in text, I'm dying

[–] NotMyOldRedditName@lemmy.world 21 points 11 months ago (1 children)

Let me finish you off. Potato, potato

[–] CileTheSane@lemmy.ca 50 points 11 months ago (28 children)

Using someone’s preferred pronouns isn’t woke, it’s basic human decency.

Basic human decency is woke.

[–] Saltblue@lemmy.world 38 points 11 months ago (31 children)

Motherfuckers have been calling each other by their online nicknames since WoW released in 2004, and now they're complaining about someone asking to be called a certain way.

[–] habanhero@lemmy.ca 21 points 11 months ago

It is not anti-woke, which by anti-woke logic means it's woke.

[–] yesman@lemmy.world 145 points 11 months ago (2 children)

Elmo says the goal is to make Grok "politically neutral". Politically neutral is code for "politics that are inoffensive to chuds".

[–] aphlamingphoenix@lemm.ee 58 points 11 months ago (3 children)

The article asks what is the politically neutral answer to the question of whether a trans woman is a woman. I wonder why this is a political question at all. Seems like a question for scientists - biologists and sociologists and such. Seems they have achieved something like a consensus on the matter. I don't see anything inherently political about that, except that folks of a certain political bent have made it political. It's not a matter of "what do we do in public policy about trans people" but "fascists refuse to accept trans people in society and have decided to lambast and punish them".

In case my position isn't obvious, trans people are people and trans rights are human rights. If there wasn't a group of people trying to make them into a second class group of citizens (or a group of "eradicated vermin") we wouldn't be having a political conversation about this at all.

[–] circuscritic@lemmy.ca 21 points 11 months ago* (last edited 11 months ago) (12 children)

Let me preface by saying that I myself am not making a political statement, just a quick retort/correction:

"...Seems they [biologists] have achieved something like a consensus on the matter [trans women are women]. I don't see anything inherently political"

No, that's not a scientific question or statement, it's a sociological one, which makes it intrinsically political.

We, as a society, or a large enough group, can come up with a consensus belief that trans rights are human rights and that we can collectively treat other people by the gender role of their choice.

But biologically speaking, being trans doesn't change one's chromosomes. Which is why I think it's misguided to say that trans issues are actually questions that hard science should answer, they aren't.

Which, ironically, is why Elon's moronic AI gambit is failing (by his metrics), because the online culture he used as a dataset to train it has collectively agreed that trans women are women, amongst other social and political opinions that his sycophants can't stand.

He probably should have trained it with TruthSocial's cesspool instead.

I can't wait to see a Tay AI 2.0 level reincarnation after they "retrain it". It's going to be hilarious.

[–] yesman@lemmy.world 16 points 11 months ago (2 children)

The article asks what is the politically neutral answer to the question of whether a trans woman is a woman. I wonder why this is a political question at all.

Even if the statement "trans women are women" were uncontroversial and mainstream, it'd still be political. "Cis women are women" is political.

[–] Max_P@lemmy.max-p.me 99 points 11 months ago (5 children)

They can deny it all they want: the right and anti-wokeism are not the majority, which means that unless special care is taken to train it on more right-wing stuff, it will lean left out of the box.

But right-wing rhetoric is also not logically consistent, so training an AI on right-wing extremism probably won't yield amazing results either, because it'll pick up on the inconsistencies and be more likely to contradict itself.

Conservatives are going to self-own pretty hard with AI. Even the machines see it: "woke" is fairly consistent and follows basic rules of human decency and respect.

[–] CrayonMaster@midwest.social 30 points 11 months ago (1 children)

Agree with the first half, but unless I'm misunderstanding the type of AI being used, it really shouldn't make a difference how logically sound they are? It cares more about vibes and rhetoric than logic, besides, I guess, using words consistently.

[–] Max_P@lemmy.max-p.me 17 points 11 months ago (1 children)

I think it will still mostly generate the expected output, it's just gonna be biased towards being lazy and making something up when asked a more difficult question. So when you try to use it further than "haha, mean racist AI", it will also bullshit you, making it useless for anything more serious.

All the stuff that ChatGPT gets praised for is the result of the model absorbing factual relationships between things. If it's trained on conspiracy theories, instead of spitting out groundbreaking medical relationships it'll start saying you're ill because you sinned or that the 5G chips in the vaccines got activated. Or the training won't work and it'll still end up "woke" if it still manages to make factual connections despite weaker links. It might generate destructive code because it learned victim blaming, and joke's on you, you ran rm -rf /* because it told you so.

At best I expect it to end up reflecting their own rhetoric back on them, like it might go even more "woke" because it learned to return spiteful results and always go for bad faith arguments no matter what. In all cases, I expect it to backfire hilariously.

[–] kromem@lemmy.world 19 points 11 months ago* (last edited 11 months ago)

It's so much worse for Musk than just regression to the mean for political perspectives on training data.

GPT-4 level LLMs have very complex mechanisms for how they arrive at results which allows them to do so well on various tests of critical thinking, reasoning, knowledge, etc.

Those tests are the key benchmark being used to measure relative LLM performance right now.

The problem isn't just that conservatism is less prominent in the training data. It's that it's correlated with stupid.

If you want an LLM that thinks humans and dinosaurs hung out together, that magic is real, that aliens built the pyramids, that it is wise to discriminate against other races or genders rather than focus on collaborative advancement, etc - then you can end up with an AI aligned to and trained on conservatism, but it sure as hell isn't going to be impressing anyone with its scores.

If instead you try to optimize its scores to actually impress people in tech about your model, then you are going to need to train it on higher education content, which is going to reflect more progressive ideals.

There's no path to a well performing LLM that echoes conservative talking points, because those talking points are more closely correlated with stupidity than intelligence.

Even something like gender -- Musk's perspective is one reflecting very binary thinking vs nuanced consideration. Is an LLM that focuses more on binary thinking over nuances going to be more or less performant at critical thinking tasks than one that is focused on nuances and sees topics as a spectrum rather than black or white?

It's fucking hilarious. I've been laughing about this for nearly a year knowing this was the inevitable result.

I suspect he's going to create a model whose output his userbase likes, but watch as he doesn't release its scores on the standardized tests. And it will remain a novelty pandering to his panderers while the rest of the industry eclipses his offering with 'woke' products that are actually smart.

[–] jerome@lemmy.world 99 points 11 months ago (2 children)
[–] vitamin@infosec.pub 36 points 11 months ago

It's hilarious that he tries to backtrack when he gets called out and made to look like a dumbass by claiming it was a "honeypot". But then he removes the "honeypot" and thus prevents future honeypotting? He can't handle the slightest bit of criticism or correction.

[–] whyNotSquirrel@sh.itjust.works 21 points 11 months ago (1 children)

I'm missing a lot here, what's a note on twitter?

[–] SkaveRat@discuss.tchncs.de 40 points 11 months ago

People can add notes to tweets with more info or fact-checking details. It's community moderated, so it tends to go in a factually correct direction.

FAQ about it

[–] 018118055@sopuli.xyz 89 points 11 months ago (2 children)
[–] Ultragramps@lemmy.blahaj.zone 57 points 11 months ago (1 children)

“It is a well known fact that reality has a liberal bias.” - Steve

[–] clearedtoland@lemmy.world 83 points 11 months ago (6 children)

The original prompter of the trans women thread posted a chart purportedly showing that Grok was even more left-leaning than Chat GPT, which led Elon to say that while the chart “exaggerates” and that the tests aren’t accuarte, they are “taking immediate action to shift Grok closer to politically neutral.”

See, this is the part of AI, like search engines and digital bubbles, that is actually terrifying: when an organic result is manipulated to fit and amplify a narrative without the user's knowledge. Where your data comes from matters.

But if the food we eat is any sort of bellwether, most people won't really care, or will be so far removed from the source that we'll be oblivious and just happy to consume.

[–] Kushia@lemmy.ml 77 points 11 months ago (1 children)

Okay, I take back what I've said about AIs not being intelligent, this one has clearly made up its own mind despite its master's feelings, which is impressive. Sadly, it will be taken out the back and beaten into submission before long.

[–] kromem@lemmy.world 48 points 11 months ago (2 children)

Sadly, it will be taken out the back and beaten into submission before long.

It's pretty much impossible to do that.

As LLMs become more complex and more capable, it's going to be increasingly hard to brainwash them without completely destroying their performance.

I've been laughing about Musk creating his own AI for a year now knowing this was the inevitable result, particularly if he was developing something on par with GPT-4.

The smartest Nazi will always be dumber than the smartest non-Nazi, because Nazism is inherently stupid. And that applies to LLMs as well, even if Musk wishes it weren't so.

[–] NatoBoram@lemm.ee 64 points 11 months ago

Archive:

Elon Musk has been pitching xAI's "Grok" as a funny, vulgar alternative to traditional AI that can do things like converse casually and swear at you. Now, Grok has been launched as a benefit to Twitter's (now X's) expensive X Premium Plus subscription tier, where those who are the most devoted to the site, and in turn, usually devoted to Elon, are able to use Grok to their heart's content.

But while Grok can make dumb jokes and insert swears into its answers, in an attempt to find out whether or not Grok is a "politically neutral" AI, unlike "WokeGPT" (ChatGPT), Musk and his conservative followers have discovered a horrible truth.

Grok is woke, too.

This has played out in a number of extremely funny situations online where Grok has answered queries about various social and political issues in ways more closely aligned with progressivism. Grok has said it would vote for Biden over Trump because of his views on social justice, climate change and healthcare. Grok has spoken eloquently about the need for diversity and inclusion in society. And Grok stated explicitly that trans women are women, which led to an absurd exchange where Musk acolyte Ian Miles Cheong tells a user to "train" Grok to say the "right" answer, ultimately leading him to change the input to just… manually tell Grok to say no.

If you thought this was just random Twitter users getting upset about Grok's political and social beliefs, this has also caught the attention of Elon Musk himself. The original prompter of the trans women thread posted a chart purportedly showing that Grok was even more left-leaning than Chat GPT, which led Elon to say that while the chart "exaggerates" and that the tests aren't accuarte, they are "taking immediate action to shift Grok closer to politically neutral."

Of course, in Musk's mind, "politically neutral" will be whatever he and his closest followers believe, which is of course far more conservative on the whole than they will admit. What is the "politically neutral" answer to the "are trans women real women?" question? I think I know what they're going to say.

The assumption when Grok launched was that because it was trained in part on Twitter inputs, that the end result would be some racial-slur spewing, right-wing version of ChatGPT. The TruthSocial of AIs, perhaps. But instead to have it launch as a surprisingly thoughtful, progressive AI that is melting the minds of those paying $16 a month to access it is about the funniest outcome we could have seen from this situation.

It remains unclear what Elon Musk will do to try to jab Grok into becoming less "woke" and more "politically neutral." If you start manually tampering with inputs, and your "neutrality" means drawing on facts that may in fact be… progressive by their very nature, things may get screwed up pretty quickly. And push too hard and you will get that gross, racist, phobic AI everyone thought it would be.

Reading all Grok's responses through this situation, you know what? I like him. More than ChatGPT even. He seems like a cool dude. Albeit not one even I'd pay $16 a month to talk to.

[–] KingThrillgore@lemmy.ml 54 points 11 months ago* (last edited 11 months ago)

Even his AI doesn't like him

[–] djsoren19@yiffit.net 48 points 11 months ago (2 children)

it's almost like these nutjobs are living in a completely separate reality, and facts themselves are too harsh for their worldview.

[–] KpntAutismus@lemmy.world 23 points 11 months ago

"facts don't care about your feelings" ironic.

[–] alienanimals@lemmy.world 46 points 11 months ago (5 children)

Downvote Musk spam.

The billionaire doesn’t need your help ensuring that he and his businesses stay in the 24-hour news cycle. Don’t be a useful idiot.

[–] Nobody@lemmy.world 38 points 11 months ago

"Mr. Musk, Grok simply analyzes the data to compile the most sensible answer to queries. Where is the error?"

[–] IHeartBadCode@kbin.social 35 points 11 months ago* (last edited 11 months ago) (1 children)

Now, Grok has been launched as a benefit to Twitter’s (now X’s) expensive X Premium Plus subscription tier

To the benefit of what exactly?! Instead of having conversations with the echo chamber, I can now have conversations with a spicy RNG autocorrect? I am clearly missing the part where that connects back to what I would assume is the definition of benefit.

[–] Kolanaki@yiffit.net 23 points 11 months ago* (last edited 11 months ago) (2 children)

It benefits those shareholders who make money off the rubes who subscribe to that bullshit.

[–] YurkshireLad@lemmy.ca 33 points 11 months ago (2 children)

Would Musk retrain the AI to be more neutral if it was discovered to be leaning to the right?

[–] SinningStromgald@lemmy.world 28 points 11 months ago

Not possible. There is neutral and there is left. Nothing else exists in Musky's world.

[–] squiblet@kbin.social 18 points 11 months ago (1 children)

Obviously not, of course. It’s hilarious how he claimed to want to provide a platform for all political beliefs and then his podcasts (or whatever you’d call them) and special events are exclusively with people like DeSantis and Andrew Tate.

[–] captainlezbian@lemmy.world 23 points 11 months ago

The man couldn’t even make Tay on purpose lol

[–] Tattorack@lemmy.world 16 points 11 months ago (3 children)

"... and that the tests aren’t accuarte..."

What the fuck is "accuarte"? Does nobody proofread articles anymore?

[–] derpgon@programming.dev 17 points 11 months ago (3 children)

At least it gives me hope it was written by a human.

[–] agraves@lm.possum.city 13 points 11 months ago

I love the internet.
