SneerClub

783 readers
21 users here now

Hurling ordure at the TREACLES, especially those closely related to LessWrong.

Posts or links discussing our very good friends should have "NSFW" ticked (Nice Sneers For Winners).

AI-Industrial-Complex grift is fine as long as it sufficiently relates to the AI doom from our very good friends.

This is sneer club, not debate club. Unless it's amusing debate.

[Especially don't debate the race scientists, if any sneak in - we ban and delete them as unsuitable for the server.]

founded 1 year ago
26
 
 

In which a man disappearing up his own asshole somehow fails to be interesting.

27
28
 
 

content warning: Zack Davis. so of course this is merely the intro to Zack's unquenchable outrage at Yudkowsky using the pronouns that someone wants to be called by

29
 
 

In her sentencing submission to the judge in the FTX trial, Barbara Fried argues that her son is just a misunderstood altruist, who doesn't deserve to go to prison for very long.

Excerpt:

One day, when he was about twelve, he popped out of his room to ask me a question about an argument made by Derek Parfit, a well-known moral philosopher. As it happens, I am quite familiar with the academic literature Parfit's article is a part of, having written extensively on related questions myself. His question revealed a depth of understanding and critical thinking that is not all that common even among people who think about these issues for a living. "What on earth are you reading?" I asked. The answer, it turned out, was he was working his way through the vast literature on utilitarianism, a strain of moral philosophy that argues that each of us has a strong ethical obligation to live so as to alleviate the suffering of those less fortunate than ourselves. The premises of utilitarianism obviously resonated strongly with what Sam had already come to believe on his own, but gave him a more systematic way to think about the problem and connected him to an online community of like-minded people deeply engaged in the same intellectual and moral journey.

Yeah, that "online community" we all know and love.

30
 
 

rootclaim appears to be yet another group of people who, having stumbled upon Bayes' rule as a good-enough alternative to critical thinking, decided to try their luck at becoming a Serious and Important Arbiter of Truth in a Post-Mainstream-Journalism World.

This includes a Randi-esque challenge: they'll take a $100K bet that you can't prove them wrong on a select group of topics they've done deep dives on, like whether the 2020 election was stolen (91% nay) or whether covid was man-made and leaked from a lab (89% yea).

Also their methodology yields results like 95% certainty on Usain Bolt never having used PEDs, so it's not entirely surprising that the first person to take their challenge appears to have wiped the floor with them.

Don't worry though, they have taken the results of the debate to heart, and according to their postmortem blogpost they learned many important lessons, like how they need to (checks notes) gameplan around the rules of the debate better? What a way to spend $100K... Maybe once you've reached a conclusion using the Sacred Method, changing your mind becomes difficult.

I've included the novel-length judges' opinions in the links below, where a cursory look indicates they are notably less charitable towards rootclaim's views than their postmortem suggests, pointing at stuff like logical inconsistencies and the inclusion of data that on closer inspection appears basically irrelevant to the thing they are trying to model probabilities for.

There's also like 18 hours of video of the debate if anyone wants to really get into it, but I'll tap out here.

ssc reddit thread

quantian's short writeup on the birdsite, will post screens in comments

pdf of judge's opinion that isn't quite book length, 27 pages, judge is a microbiologist and immunologist PhD

pdf of other judge's opinion that's 87 pages, judge is an applied mathematician PhD with a background in mathematical virology -- despite the length this is better organized and generally way more readable, if you can spare the time.

rootclaim's post mortem blogpost, includes more links to debate material and judge's opinions.

edit: added additional details to the pdf descriptions.

31
 
 
32
 
 

Of course, young optimistic me would have considered this an easy thing to have a QA test for, but here we are in 2024 and I am neither young nor optimistic. Maybe the AI QA folks were in the last few rounds of Google layoffs or something.

33
 
 

This has convinced me more and more that the only possible way forward that’s not a dystopian hellscape is total freedom of all AI for anyone to do with as they wish. Anything else is forcing values

This dude also posts a direct link to a race-bait bluecheck two comments down, further cementing hn AI threads as downstream of frog twitter.

I know this one might be stretching it a bit, but every comment on this post is sneer-worthy, every single one.

34
 
 

Some gems from the article.

... We numbered 50 or so. We came from places like Harvard and Stanford and UChicago and MIT and U Penn. There was James, who studied computer science. Then there was Cameron, who also studied computer science. David and Peter studied computer science, while Luke and Albert studied computer science. As for Mike and Jason, the former studied computer science, whereas the latter studied computer science. Ethan was not unlike Max, in that both studied computer science. Some people studied business, too.

The students’ demographics were as revealing as their chosen majors. Roughly 80% were white. Over 70% were men. There was not a black man in the room.

(And if you need to leave to use the bathroom, you’ll get to pass by a massive oil painting of George W. Bush making the Hand of Benediction in front of the wreckage of 9/11, beside a Madonna-figure whose halo glows, I shit you not, with the Coca Cola logo.)

Peter springs to the center of the room. The air pressure changes. A buzz, a hum, a current about us. He brims with a frenzied energy. Something is happening. He is going to give us a taste of what’s to come, he says. This is the kind of intellectual activity we’re going to experience at UATX. We’re going to grapple with big issues. We’re going to be daring, fearless, undaunted. We’re going, he says, to do something called “Street Epistemology.”

What is Street Epistemology? He’ll demonstrate. It’s one of two things he does, the other being jiu-jitsu. “I don’t have a life,” he says. “I talk to strangers and I wrestle strangers.” But before we can do Street Epistemology, Peter needs to think of some questions.

“You gotta get into jiu-jitsu, man. I’m telling you.” Peter did jiu-jitsu. It’d changed his life. He spun around in his seat, scanned the rest of the bus, then whipped back to laser his eyes on me. “I could murder everybody on this bus and nobody could stop me. It’s a superpower.” I thought this over.

Many of the founders had participated in the same conservative think tanks: The Hoover Institution, The Manhattan Institute, The American Enterprise Institute. Many had contributed to The Free Press, the digital paper founded by Bari Weiss in 2021, the same year UATX was announced. Many were friends or fans of Jordan Peterson. One UATX founder was even double-dipping, delivering lectures at both UATX and Peterson’s forthcoming Peterson Academy. One had been fired from Princeton University after sleeping with a student and “discouraging her from seeking mental health care,” per an official university statement. One had been accused of assaulting his girlfriend. (The charges were dropped.) Another had had a talk at MIT canceled after comparing Affirmative Action to “the atrocities of the 20th century.” And so, beneath their optimism, there churned bitterness and indignation at their mistreatment by the Thought Police—sour feelings they sweetened with their commitment to “free and open inquiry.”

35
 
 

The one promised in this post several months ago.

@collectivist spotted that the finished product was out:

When he posted the finished video on youtube yesterday, there were some quite critical comments on youtube, the EA forum and even lesswrong. Unfortunately they got little to no upvotes while the video itself got enough karma to still be on the frontpage on both forums.

YouTube; LessWrong; EA Forum

the video is everything you'd expect. The power of classical liberalism and technology segues into uwu libertarianism. I made it about three minutes with a great deal of skipping.

36
 
 

the r/SneerClub archives are finally online! this is an early v1 which contains 1,940 posts grabbed from the Reddit UI using Bulk Downloader for Reddit. this encompasses both the 1000 most recent posts on r/SneerClub as well as a set of popular historical posts

as a v1, you'll notice a lot of jank. known issues are:

  • this won't work at all on mobile because my css is garbage. it might not even work on anyone else's screen; good luck!
  • as mentioned above, only 1,940 posts are in this release. there's a full historical archive of r/SneerClub sourced from pushshift at the archive data git repo (or clone git://these.awful.systems/sneer-archive-data.git); the remaining work here is to merge the BDFR and pushshift data into the same JSON format so the archives can pull in everything
  • markdown is only rendered for posts and first-level comments; everything else just gets the raw markdown. I couldn't figure out how to make miller recursively parse JSON, so I might have to write some javascript for this
  • likewise, comments display a unix epoch instead of a rendered time
  • searching happens locally in your browser, but only post titles and authors are indexed to keep download sizes small
  • speaking of, there's a much larger r/SneerClub archive that includes the media files BDFR grabbed while archiving. it's a bit unmanageable to actually use directly, but is available for archival purposes (and could be included as part of the hosted archive if there's demand for it)

if you'd like the source code for the r/SneerClub archive static site, it lives here (or clone git://these.awful.systems/sneer-archive-site.git)
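for the epoch-display issue in the known-issues list, here's a minimal sketch of one way to do it in plain client-side JavaScript — `renderEpoch` is a hypothetical helper, not anything from the actual archive code:

```javascript
// Hypothetical helper (not from the archive's source): render the unix epoch
// seconds stored in each comment's JSON as a readable UTC timestamp.
function renderEpoch(epochSeconds) {
  // JavaScript Dates take milliseconds, so scale up first.
  return new Date(epochSeconds * 1000)
    .toISOString()               // e.g. "2023-06-30T12:00:00.000Z"
    .replace("T", " ")           // swap the ISO 8601 separator for a space
    .replace(/\.\d+Z$/, " UTC"); // drop milliseconds, label the zone
}
```

since the epoch is already in the JSON, this could run at page load over the comment nodes without touching the miller pipeline.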

37
 
 

Been waiting to come back to the steeple of the sneer for a while. It's good to be back. I just really need to sneer; this one's been building for a long time.

Now I want to gush to you guys about something that's been really bothering me for a good long while now. WHY DO RATIONALISTS LOVE WAGERS SO FUCKING MUCH!?

I mean holy shit, there's a wager for everything now. I read a wager that said that we can just ignore moral anti-realism cos 'muh decision theory', that we must always hedge our bets on evidential decision theory, new Pascal's wagers, entirely new decision theories, the whole body of literature on moral uncertainty, Schwitzgebel's 1% skepticism, and so. much. more.

I'm beginning to think it's the only type of argument they can make, because it allows them to believe obviously problematic things on the basis that they 'might' be true. I don't know how decision theory went from a useful heuristic in certain situations and in economics to arguing that no matter how likely it is that utilitarianism is true you have to follow it cos math, acausal robot gods, fuckin infinite ethics, basically providing the most egregiously smug escape hatch to ignore entire swathes of philosophy, etc.

It genuinely pisses me off, because they can drown their opponents in mathematical formalisms, 50-page-long essays all amounting to impenetrable 'wagers' that they can always defend no matter how stupid it is, because this thing 'might' be true; and they can go off and create another rule (something along the lines of 'the antecedent promulgation ex ante expected pareto ex post cornucopian malthusian utility principle') that they need for the argument to go through, do some calculus, declare it 'plausible', and then call it a day. Like I said, all of this is so intentionally opaque that nobody other than their small clique can understand what the fuck they are going on about, and even then there is little to no disagreement within said clique!

Anyway, this one has been coming for a while, but I hope to have struck up some common ground between me and some other people here.

38
39
 
 

I don't particularly disagree with the piece, but it's striking how little effort is put in to make this resemble a news piece or a typical Vox explainer. It's just blatant editorializing ("Please do this thing I want") and very blatantly carrying water for the--somehow non-discredited--EA movement's priorities.

40
 
 

he takes a couple of pages to explain how he knows that sightings of UFOs aren't aliens: he can simply infer how superintelligent beings would operate and how advanced their technology would be. he then undercuts his own point by admitting he's very uncertain about both of those things, but wraps it up nicely with an excessively wordy speech about how making big bets on your beliefs is the responsible way to be a thought leader. bravo

41
42
 
 

@sneerclub

Greetings!

Roko called, just to say he's filed a trademark on Basilisk™ and will be coming after anyone who talks about it for licensing fees, which will go into his special Basilisk™ Immanentization Fund, and if we don't pay up we'll burn in AI hell forever once the Basilisk™ wakes up and gets around to punishing us.

Also, if you see your mom, be sure and tell her SATAN!!!!—

43
1
Universal Watchtowers (awful.systems)
submitted 1 year ago* (last edited 1 year ago) by dgerard@awful.systems to c/sneerclub@awful.systems
 
 

by Monkeon, from the b3ta Mundane Video Games challenge

44
 
 

Yudkowsky writes,

How can Effective Altruism solve the meta-level problem where almost all of the talented executives and ops people were in 1950 and now they're dead and there's fewer and fewer surviving descendants of their heritage every year and no blog post I can figure out how to write could even come close to making more people being good executives?

Because what EA was really missing is collusion to hide the health effects of tobacco smoking.

45
 
 

This totally true anecdote features a friend who "can't recall the names of his parents [but] remember[s] the one thing he'd be safer forgetting."

46
 
 

Discussion on AI starts at about 17mins. The Bas(ilisk) drop happens at 20:30. Sorry if ads mess up my time stamps. I think this is the second time it’s come up on the show.

47
48
 
 

Source Tweet

@ESYudkowsky: Remember when you were a kid and thought you might have psychic powers, so you dealt yourself face-down playing cards and tried to guess whether they were red or black, and recorded your accuracy rate over several batches of tries?


And then remember how you had absolutely no idea how to do stats at that age, so you stayed confused for a while longer?


Apologies for the use of the Japanese, but it is a very apt description: https://en.wikipedia.org/wiki/Chūnibyō

49
 
 

really: https://archive.ph/p0jPI

Roko’s twitter is an absolutely reliable guide to how recently a woman with dyed hair and facial piercings kicked him in the nuts again

50