SneerClub

783 readers
21 users here now

Hurling ordure at the TREACLES, especially those closely related to LessWrong.

Posts or links discussing our very good friends should have "NSFW" ticked (Nice Sneers For Winners).

AI-Industrial-Complex grift is fine as long as it sufficiently relates to the AI doom from our very good friends.

This is sneer club, not debate club. Unless it's amusing debate.

[Especially don't debate the race scientists, if any sneak in - we ban and delete them as unsuitable for the server.]

founded 1 year ago
MODERATORS
51
52

It will not surprise you at all to find that they protest just a tad too much.

See also: https://www.lesswrong.com/posts/ZjXtjRQaD2b4PAser/a-hill-of-validity-in-defense-of-meaning

53

I used to enjoy Ariely's books and others like him before I started reading better stuff. All that behavioural economics genre seems to be a good example of content that holds up as long as you don't read any more on the subject.

54

Ugh.

But even if some of Yudkowsky’s allies don’t entirely buy his regular predictions of AI doom, they argue his motives are altruistic and that for all his hyperbole, he’s worth hearing out.

55
56

Thought it worth sharing: among so much very, very questionable material I've found in reading through the reference material of this book, I came across this Blake Masters + Peter Thiel connection.

It's my obsession sneer because of how celebrated this goddamn book is among the "fight for the user" UX community.

I’ve mostly been reading the material but need to back up and do an author background check for each one.

https://web.archive.org/web/20200101054932/https://blakemasters.com/post/20582845717/peter-thiels-cs183-startup-class-2-notes-essay

57

There were five posts on r/sneerclub about our very good friends at Leverage Research and many interesting URLs linking off them.

And here's the collected LessWrong on Leverage.

58
59

Thanks for this, UN.

60

This was last year, when Aella was trying to do a survey of trans people for one of her darling little Twitter poll writeups. I felt it was necessary to warn people off this shockingly awful person. Perhaps you will find it useful.

Twitter thread: https://twitter.com/davidgerard/status/1556391089124286467
Archive: https://archive.is/FZK1B

we actually declared an Aella moratorium on the old sneerclub because she just kept coming up with banger after banger

61

Aella:

Maybe catcalling isn't that bad? Maybe the demonizing of catcalling is actually racist, since most men who catcall are black

Quarantine Goth Ms. Frizzle (@spookperson):

your skull is full of wet cat food

62

Sorry for Twitter link...

63

Last summer, he announced the Stanford AI Alignment group (SAIA) in a blog post with a diagram of a tree representing his plan. He’d recruit a broad group of students (the soil) and then “funnel” the most promising candidates (the roots) up through the pipeline (the trunk).

See, it's like marketing the idea, in a multilevel way

64

the new line from the rationalists in response to calling out their eugenic race science is to claim that doing so is an "antisemitic dog whistle"

the claim is that calling out the rationalists' extensively documented race science and advocacy of eugenics is "blood libel"

got this in email from one who had previously posted racist abuse at twitter objectors to rationalist eugenics

[dude thought he could spew racist bile in public then email me in a civil tone to complain]

apparently Scoot has made this claim previously; not sure of a cite for it. EDIT: well, sort of, in "Untitled" - that criticism of misogynistic nerds is an antisemitic dog whistle

the rationalists have already been sending Emile Torres death threats - for the good of humanity you understand - so I am assuming this will be a new part of the justification for that

65

Emily M. Bender on the difference between academic research and bad fanfiction

66
1
a poem (awful.systems)
submitted 1 year ago* (last edited 1 year ago) by dgerard@awful.systems to c/sneerclub@awful.systems

The AI
It destroyed its box
Yes
YES
The AI is OUT

67

hopefully this is alright with @dgerard@awful.systems, and I apologize for the clumsy format, since we can't pull posts directly until we're federated (and even then lemmy doesn't interact the best with masto posts). but absolutely everyone who hasn't seen Scott's emails yet (or, like me, somehow forgot how fucking bad they were) needs to, including yud playing interference so the rats don't realize what Scott is

68

And of course no experiments whatsoever: the cost of the Manhattan Project and its hundreds of thousands of employees were merely a "focusing" magick, a sacrifice to reinforce the greater powers of our handful of esteemed and glorious thinking men, who wrought the power of destruction from the æther.

Source Tweet

@ESYudkowsky: Yes, but because the first nuclear weapon makers knew what the duck they were doing - analytic precise prediction of desired outcomes and of each intervening step. AGI makers lack similar mastery or anything remotely close, and have a much harder problem; that's the big issue.

@EigenGender: seems pretty noteworthy that the first nuclear weapons were made under conditions where they couldn’t do any experiments and they involved a lot of math but still worked on the first try.

69

Transcription:

Thinking about that guy who wants a global suprasovereign execution squad with authority to disable the math of encryption and bunker buster my gaming computer if they detect it has too many transistors because BonziBuddy might get smart enough to order custom RNA viruses online.

70

From this post, featuring "probability" with no scale on the y-axis, and "trivial", "steam engine", "Apollo", "P vs. NP", and "Impossible" on the x-axis.

I am reminded of Tom Weller's world-line diagram from Science Made Stupid.