TinyTimmyTokyo

joined 1 year ago
[–] TinyTimmyTokyo@awful.systems 8 points 1 month ago

As anyone who's been paying attention already knows, LLMs are merely mimics that provide the "illusion of understanding".

[–] TinyTimmyTokyo@awful.systems 7 points 4 months ago (1 children)

As a longtime listener to Tech Won't Save Us, I was pleasantly surprised by my phone's notification about this week's episode. David was charming and interesting in equal measure. I mostly knew Jack Dorsey as the absentee CEO of Twitter who let the site stagnate under his watch, but there were a lot of little details about his moderation-phobia and fash-adjacency that I wasn't aware of.

By the way, I highly recommend the podcast to the TechTakes crowd. They cover many of the same topics from a similar perspective.

[–] TinyTimmyTokyo@awful.systems 8 points 4 months ago (1 children)

For me it gives off huge Dr. Evil vibes.

If you ever get tired of searching for pics, you could always go the lazy route and fall back on AI-generated images. But then you'd have to accept the reality that in a few years your posts would have the analog of a GeoCities webring stamped on them.

[–] TinyTimmyTokyo@awful.systems 8 points 4 months ago

Please touch grass.

[–] TinyTimmyTokyo@awful.systems 9 points 4 months ago

The next AI winter can't come too soon. They're spinning up coal-fired power plants to supply the energy required to build these LLMs.

[–] TinyTimmyTokyo@awful.systems 5 points 7 months ago (1 children)

I've been using DigitalOcean for years as a personal VPS box, and I've had no complaints. Not sure how well they'd scale (in terms of cost) for a site like this.

[–] TinyTimmyTokyo@awful.systems 10 points 7 months ago (5 children)

Anthropic's Claude confidently and incorrectly diagnoses brain cancer based on an MRI.

[–] TinyTimmyTokyo@awful.systems 12 points 8 months ago (6 children)

Strange man posts strange thing.

[–] TinyTimmyTokyo@awful.systems 12 points 8 months ago

This linked interview of Brian Merchant by Adam Conover is great. I highly recommend watching the whole thing.

For example, here is Adam describing the actual reasons why striking writers were concerned about AI, followed by Brian explaining how Sam Altman et al. hype up the existential risk they themselves claim to be creating, just so they can sell themselves as the solution. Lots of really edifying stuff in this interview.

[–] TinyTimmyTokyo@awful.systems 9 points 8 months ago (1 children)

She really is insufferable. If you've ever listened to her Pivot podcast (do not advise), you'll be confronted by the superficiality and banality of her hot takes. Of course, this assumes you're able to penetrate the word salad she regularly uses to convey whatever point she's trying to make. She is not a good verbal communicator.

Her co-host, "Professor" [*] Scott Galloway, isn't much better. While more verbally articulate, his dick joke-laden takes are often even more insufferable than Swisher's. I'm pretty sure Kara sourced her "use AI or be run over by progress" opinion from him; it's one of his most frequent hot takes. He's also one of the biggest tech hype maniacs, so of course he's bought a ticket on the AI hype express. Before the latest AI boom he was a crypto booster, although he's totally memory-holed that phase of his life now that the crypto hype train has run off a cliff.

[*] I put professor in quotes, because he's one of those people who insist on using a title that is equal parts misleading and pretentious. He doesn't have a doctorate in anything, and while he's technically employed by NYU's business school, he's a non-tenured "clinical professor", which is pretty much the same as an adjunct. Nothing against adjunct professors, but most adjuncts I've known don't go around insisting that you call them "professor" in every social interaction. It's kind of like when Ph.D.s insist you call them "doctor".

[–] TinyTimmyTokyo@awful.systems 10 points 8 months ago

I wonder what percentage of fraudulent AI-generated papers would be discovered simply by searching for sentences that begin with "Certainly, ..."
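(A minimal sketch of that search heuristic in Python, for the curious. It assumes the papers are already available as plain-text files; the `papers/*.txt` path and the exact pattern are just illustrative, not any real screening tool.)

```python
import glob
import re

# Naive heuristic: flag any paper containing a sentence that begins with
# "Certainly," -- a chatbot preamble that sometimes survives copy-paste.
# The papers/*.txt path is hypothetical; point it at your own corpus.
PATTERN = re.compile(r"(?:^|[.!?]\s+)Certainly,", re.MULTILINE)

for path in glob.glob("papers/*.txt"):
    with open(path, encoding="utf-8") as f:
        hits = PATTERN.findall(f.read())
    if hits:
        print(f"{path}: {len(hits)} 'Certainly, ...' sentence(s)")
```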

 

The New Yorker has a piece on the Bay Area AI doomer and e/acc scenes.

Excerpts:

[Katja] Grace used to work for Eliezer Yudkowsky, a bearded guy with a fedora, a petulant demeanor, and a p(doom) of ninety-nine per cent. Raised in Chicago as an Orthodox Jew, he dropped out of school after eighth grade, taught himself calculus and atheism, started blogging, and, in the early two-thousands, made his way to the Bay Area. His best-known works include “Harry Potter and the Methods of Rationality,” a piece of fan fiction running to more than six hundred thousand words, and “The Sequences,” a gargantuan series of essays about how to sharpen one’s thinking.

[...]

A guest brought up Scott Alexander, one of the scene’s microcelebrities, who is often invoked mononymically. “I assume you read Scott’s post yesterday?” the guest asked [Katja] Grace, referring to an essay about “major AI safety advances,” among other things. “He was truly in top form.”

Grace looked sheepish. “Scott and I are dating,” she said—intermittently, nonexclusively—“but that doesn’t mean I always remember to read his stuff.”

[...]

“The same people cycle between selling AGI utopia and doom,” Timnit Gebru, a former Google computer scientist and now a critic of the industry, told me. “They are all endowed and funded by the tech billionaires who build all the systems we’re supposed to be worried about making us extinct.”

 

In her sentencing submission to the judge in the FTX trial, Barbara Fried argues that her son is just a misunderstood altruist, who doesn't deserve to go to prison for very long.

Excerpt:

One day, when he was about twelve, he popped out of his room to ask me a question about an argument made by Derek Parfit, a well-known moral philosopher. As it happens, I am quite familiar with the academic literature Parfit's article is a part of, having written extensively on related questions myself. His question revealed a depth of understanding and critical thinking that is not all that common even among people who think about these issues for a living. "What on earth are you reading?" I asked. The answer, it turned out, was that he was working his way through the vast literature on utilitarianism, a strain of moral philosophy that argues that each of us has a strong ethical obligation to live so as to alleviate the suffering of those less fortunate than ourselves. The premises of utilitarianism obviously resonated strongly with what Sam had already come to believe on his own, but gave him a more systematic way to think about the problem and connected him to an online community of like-minded people deeply engaged in the same intellectual and moral journey.

Yeah, that "online community" we all know and love.

[–] TinyTimmyTokyo@awful.systems 31 points 9 months ago (2 children)

Eats the same bland meal every day of his life. Takes an ungodly number of pills every morning. Uses his son as his own personal blood boy. Has given himself a physical appearance that can only be described as "uncanny valley".

I'll never understand the extremes some of these tech bros will go to in order to deny the inevitability of death.

 

Non-paywalled link: https://archive.ph/9Hihf

In his latest NYT column, Ezra Klein identifies the neoreactionary philosophy at the core of Marc Andreessen's recent excrescence on so-called "techno-optimism". It wasn't exactly a difficult analysis, given the way Andreessen outright lists a gaggle of neoreactionaries as the inspiration for his screed.

But when Andreessen included "existential risk" and transhumanism on his list of enemy ideas, I'm sure the rationalists and EAs were feeling at least a little bit offended. Klein, as a co-founder of Vox (home of the EA-promoting "Future Perfect" vertical), was probably among those who felt targeted. He has certainly bought into the rationalist AI doomer bullshit, so you know where he stands.

So have at it, Marc and Ezra. Fight. And maybe take each other out.

 

Representative take:

If you ask Stable Diffusion for a picture of a cat it always seems to produce images of healthy looking domestic cats. For the prompt "cat" to be unbiased Stable Diffusion would need to occasionally generate images of dead white tigers since this would also fit under the label of "cat".

 

Excerpt:

Richard Hanania, a visiting scholar at the University of Texas, used the pen name “Richard Hoste” in the early 2010s to write articles where he identified himself as a “race realist.” He expressed support for eugenics and the forced sterilization of “low IQ” people, who he argued were most often Black. He opposed “miscegenation” and “race-mixing.” And once, while arguing that Black people cannot govern themselves, he cited the neo-Nazi author of “The Turner Diaries,” the infamous novel that celebrates a future race war.

He's also a big eugenics supporter:

“There doesn’t seem to be a way to deal with low IQ breeding that doesn’t include coercion,” he wrote in a 2010 article for AlternativeRight.com. “Perhaps charities could be formed which paid those in the 70-85 range to be sterilized, but what to do with those below 70 who legally can’t even give consent and have a higher birthrate than the general population? In the same way we lock up criminals and the mentally ill in the interests of society at large, one could argue that we could on the exact same principle sterilize those who are bound to harm future generations through giving birth.”

(Reminds me a lot of the things Scott Siskind has written in the past.)

Some people who have been friendly with Hanania:

  • Marc Andreessen, Silicon Valley VC and co-founder of Andreessen Horowitz
  • Hamish McKenzie, CEO of Substack
  • Elon Musk, Chief Enshittification Officer of Tesla and Twitter
  • Tyler Cowen, libertarian econ blogger and George Mason University prof
  • J.D. Vance, US Senator from Ohio
  • Steve Sailer, race (pseudo)science promoter and all-around bigot
  • Amy Wax, racist law professor at UPenn
  • Christopher Rufo, right-wing agitator and architect of many of Florida governor Ron DeSantis's culture war efforts
 

Ugh.

But even if some of Yudkowsky’s allies don’t entirely buy his regular predictions of AI doom, they argue his motives are altruistic and that for all his hyperbole, he’s worth hearing out.
