self

joined 2 years ago
[–] self@awful.systems 2 points 9 months ago

fucking fuck

[–] self@awful.systems 11 points 9 months ago (1 children)

I sent this article to a friend and their response to the thumbnail was “dear fuck why” because their eyes have forgotten what good AI art looks like

[–] self@awful.systems 11 points 9 months ago (5 children)

404 Media revisited the worthless DeepMind materials science dataset, featuring some world-class marketing gymnastics:

Google DeepMind told me in a statement, “We stand by all claims made in Google DeepMind’s GNoME paper.”

“Our GNoME research represents orders of magnitude more candidate materials than were previously known to science, and hundreds of the materials we’ve predicted have already been independently synthesized by scientists around the world,” it added.

[…]

Google said that some of the criticisms in the Chemical Materials analysis, like the fact that many of the new materials have already known structures but use different elements, were done by DeepMind by design.

hundreds of the materials have already been independently synthesized you say?

“We spent quite a lot of time on this going through a very small subset of the things that they propose and we realize not only was there no functionality, but most of them might be credible, but they’re not very novel because they’re simple derivatives of things that are already known.”

this just in, DeepMind’s output is worthless by design. but about that credibility point…

“In the DeepMind paper there are many examples of predicted materials that are clearly nonsensical. Not only to subject experts, but most high school students could say that compounds like H2O11 (which is a Deepmind prediction) do not look right,” Palgrave told me.
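(the high-school check Palgrave is alluding to is simple charge balance: assign the textbook oxidation states and see if they sum to zero. a minimal sketch — the oxidation-state table is an assumption for illustration, obviously not how real materials screening works:)

```python
import re

# Common textbook oxidation states (an assumption for this sketch;
# many elements have several possible states in real compounds).
COMMON_OXIDATION = {"H": +1, "O": -2, "Na": +1, "Cl": -1}

def charge_balance(formula: str) -> int:
    """Sum of (oxidation state * atom count); 0 means the formula balances."""
    total = 0
    for element, count in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        total += COMMON_OXIDATION[element] * int(count or 1)
    return total

print(charge_balance("H2O"))    # 2*(+1) + 1*(-2) = 0: fine
print(charge_balance("H2O11"))  # 2*(+1) + 11*(-2) = -20: nonsense
```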

by far the most depressing part of this article is that all of the scientists involved go to some lengths to defend this bullshit — every criticism is hedged with a “but we don’t hate AI and Google’s technology is still probably revolutionary, we swear!” and I don’t know if that’s due to AI companies attributing the successes of machine learning in research to unrelated LLM and generative AI tech (a form of reputation laundering they do constantly) or because the scientists in question are afraid of getting their lab’s cloud compute credits yanked if they’re too critical. knowing the banality of the technofascist evil in play at Google, it’s probably both.

[–] self@awful.systems 6 points 9 months ago

that’s a great idea! I’m hoping I can grab enough dev time this week to get our fork in better shape

[–] self@awful.systems 10 points 9 months ago (3 children)

they had exactly one (relatively upvoted) post here in the last 4 months before they decided to use the reports system as… a weird one-way DM? an attempt to get me pulled into an urgent meeting about my attitude (good fucking luck)? like even if they were hoping someone else would read it, the fuck kind of result did they expect from “not very mature tbh”?

also, a “posts to this instance only” view on profiles is a brilliant feature idea as a moderation tool

[–] self@awful.systems 10 points 9 months ago (2 children)

when I asked BasedBeffJezos what I should expect from an e/acc future, he replied “welcome to the fantasy zone. get ready !”. it was only then that I realized he had cleverly tricked me into interviewing a standup cabinet of the 1985 Sega arcade hit Space Harrier

[–] self@awful.systems 15 points 9 months ago

in order to solve the Traveling Salesman Problem, the first step is to use a machine model to confirm the user isn’t a salesman

[–] self@awful.systems 14 points 9 months ago (7 children)

welcome! this is a good first post.

"Ever since I was a kid, I wanted to figure out a theory of everything, to understand the universe."

It turned out the real Beff Jezos was a brilliant Quantum AI computing scientist.

He's only in his early 30s, but he'd already held leadership roles at two cutting-edge companies owned by Google's parent company, Alphabet.

and I can see why you needed to sneer. the entire fucking article quirkwashes e/acc and BasedBeffJezos by sharing some of the absolute stupidest opinions and memes ever formed (an e/acc staple) and a small fraction of the bigotry in their community, and handwaves it all away by claiming BasedBeffJezos must be a genius cause he was a nepo hire at two google subsidiaries and was a “Quantum AI computing scientist”, whatever the fuck that means

Despite the apparent war, e/accs and doomers have a surprising amount in common.

wow, it’s almost like your ass forgot to interview anyone outside of an AI cult for this article

[–] self@awful.systems 15 points 9 months ago (5 children)

a note to our federated guests: nothing about this move was smart

[–] self@awful.systems 15 points 9 months ago (6 children)

shut the fuck up

[–] self@awful.systems 13 points 9 months ago

that would be a key part of his job description, yes

[–] self@awful.systems 26 points 9 months ago (24 children)

Speaking at an event in London on Tuesday, Meta’s chief AI scientist Yann LeCun said that current AI systems “produce one word after the other really without thinking and planning”.

Because they struggle to deal with complex questions or retain information for a long period, they still “make stupid mistakes”, he said.

Adding reasoning would mean that an AI model “searches over possible answers”, “plans the sequence of actions” and builds a “mental model of what the effect of [its] actions are going to be”, he said.

wait, you mean the same models that supposed AI researchers were swearing had “glimmerings of intelligent reasoning” and “a complex world model” really were just outputting the most likely next word for a prompt? the current models are just fancy autocomplete but now that there’s a new product to sell, that one will be the real thing? and of course, the new models are getting pre-announced as revolutionary as interest in this horseshit in general takes a nosedive.
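(the “fancy autocomplete” framing is literal: at each step the model scores every candidate token and, under greedy decoding, emits the single most likely one, with no search or planning. a toy sketch with an invented bigram table — the words and probabilities are made up for illustration, not an actual LLM:)

```python
# Toy greedy "most likely next word" loop: the essence of autocomplete.
# The bigram probabilities below are invented for illustration.
BIGRAMS = {
    "the":     {"model": 0.6, "answer": 0.4},
    "model":   {"outputs": 0.7, "plans": 0.3},
    "outputs": {"words": 1.0},
}

def greedy_complete(word: str, steps: int) -> list[str]:
    out = [word]
    for _ in range(steps):
        options = BIGRAMS.get(out[-1])
        if not options:
            break
        # take the single highest-probability continuation: no lookahead,
        # no planning, no "mental model" of consequences
        out.append(max(options, key=options.get))
    return out

print(greedy_complete("the", 3))  # ['the', 'model', 'outputs', 'words']
```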

LeCun said it was working on AI “agents” that could, for instance, plan and book each step of a journey, from someone’s office in Paris to another in New York, including getting to the airport.

these must be the multi-agent models that AI fans won’t shut the fuck up about now that multi-modal LLMs are here and disappointing. is it just me or does the use case for this sound fucking stupid? like, there’s apps that do this already. this shit was solved already by application of the least-terrible surviving algorithms from the first AI boom. what the fuck is the point of re-solving travel planning, but now incredibly expensive and you can’t trust the results?
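(for reference, the already-solved version of LeCun’s demo: journey planning is shortest-path search over a graph of legs, exactly the kind of thing Dijkstra-style search — one of those surviving first-AI-boom algorithms — handles deterministically. a minimal sketch; the place names and leg costs are invented for illustration:)

```python
import heapq

# Invented travel graph: edges are (destination, cost in minutes).
# Real planners add timetables and transfers, but the core is the same.
LEGS = {
    "office_paris": [("CDG_airport", 45)],
    "CDG_airport":  [("JFK_airport", 480)],
    "JFK_airport":  [("office_nyc", 60)],
    "office_nyc":   [],
}

def plan(start: str, goal: str):
    """Dijkstra shortest path; returns (total_cost, route) or None."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, route = heapq.heappop(queue)
        if node == goal:
            return cost, route
        if node in seen:
            continue
        seen.add(node)
        for nxt, c in LEGS.get(node, []):
            heapq.heappush(queue, (cost + c, nxt, route + [nxt]))
    return None

print(plan("office_paris", "office_nyc"))
# (585, ['office_paris', 'CDG_airport', 'JFK_airport', 'office_nyc'])
```

same result every time, no cloud bill, and you can trust it.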
