this post was submitted on 21 Jul 2024
32 points (100.0% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community


Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

[–] mii@awful.systems 12 points 5 months ago (8 children)

https://xcancel.com/jrawson/status/1814232925967089808

The fuck.

Reposting this from the last Stubsack here by request.

[–] Soyweiser@awful.systems 11 points 5 months ago

Amazing how many of them eventually morph into somebody else. Trained on celeb faces, and still cannot keep a consistent facial shape. One more dataset bro.

[–] maol@awful.systems 11 points 5 months ago* (last edited 5 months ago) (2 children)

Just found out from a screenshot in a tweet that Marc Andreessen considers Nick Land to be some kind of patron saint of "techno-optimism". Setting aside Land's ugly views about everything... I didn't think optimism was what he was known for. More like a grasping, desperate, disgusted embrace of the onward march of capital.

[–] V0ldek@awful.systems 11 points 5 months ago

Nick Land (...) “techno-optimism”

His thing is literally called DARK ENLIGHTENMENT, fucking Final Fantasy villain level of grandiose menacing naming; how on earth would that be "optimism"?

Marcee do you even like know who your idols are?

[–] Soyweiser@awful.systems 10 points 5 months ago (1 children)

"Nothing human makes it out of the near-future." - Nick Land.

[–] dgerard@awful.systems 10 points 5 months ago

AP takes down its CF Vance apologia, no longer asserting things it cannot evidence (namely, that Vance did not have sexual relations with that couch)

before: https://archive.is/fXiMc after: https://archive.is/j3aot

This is the journalistic integrity we expect of AP News

[–] BlueMonday1984@awful.systems 10 points 5 months ago* (last edited 5 months ago) (5 children)

Not a sneer, but a mildly interesting open letter:

A specification for those who want content searchable on search engines, but not used for machine learning.

The basic idea is effectively an extension of robots.txt, one which attempts to resolve the issue by providing a means to politely ask AI crawlers not to scrape your stuff.

Personally, I don't expect this to ever get off the ground or see much usage - this proposal is entirely reliant on trusting that AI bros/companies will respect people's wishes and avoid scraping shit without people's permission.

Between OpenAI publicly wiping their asses with robots.txt, Perplexity lying about user agents to steal people's work, and the fact a lot of people's work got stolen before anyone even had the opportunity to say "no", the trust necessary for this shit to see any public use is entirely gone, and likely has been for a while.
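
For context, here is a minimal sketch of the honour system the proposal extends, assuming Python's standard urllib.robotparser and OpenAI's documented GPTBot user agent (the site and page URLs below are made up). A compliant crawler checks robots.txt before fetching anything; nothing but goodwill makes it run this check, which is exactly the problem.

```python
# A minimal sketch of the "polite crawler" model, using only the Python
# standard library. The site and page URLs are hypothetical; "GPTBot"
# is OpenAI's documented crawler user agent.
from urllib.robotparser import RobotFileParser

SITE = "https://example.com"         # stand-in for someone's site
USER_AGENT = "GPTBot"                # the crawler identifying itself honestly

parser = RobotFileParser()
parser.set_url(f"{SITE}/robots.txt")
parser.read()                        # fetch and parse the site's robots.txt

url = f"{SITE}/essays/my-post.html"  # hypothetical page the crawler wants
if parser.can_fetch(USER_AGENT, url):
    print(f"{USER_AGENT} is allowed to fetch {url}")
else:
    print(f"{USER_AGENT} has been asked to keep out of {url}")
```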

[–] dgerard@awful.systems 10 points 5 months ago (8 children)

after all this time, TIL that Roko pronounces his name "Rocko" https://www.youtube.com/watch?v=VIwJDnej7pg

[–] Architeuthis@awful.systems 11 points 5 months ago (2 children)

I'm not a native English speaker; how else was he supposed to pronounce it?

[–] blakestacey@awful.systems 10 points 5 months ago (6 children)

Regarding that claimed breakthrough about AI winning the International Mathematical Olympiad: a reminder that a proof which hangs together logically is not necessarily a proof that makes sense.

Those formalized proofs are so incredibly ugly, it's amazing. Of course it doesn't have much in the way of sensible indentation, but then there are single proof steps where I have no idea what it's even doing. [...] And then there are nonsense mathematical steps. The solution of problem 2 starts with induction, before introducing any variables. It applies induction to the number 12. And it writes 12 as (10)+2. Then it proceeds to do the whole proof in the base case of the induction, and notices that the induction step is trivial, since the goal is the same as the induction hypothesis (but instead of the assumption tactic it uses congr 26).
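
To make the "induction on 12" complaint concrete, here is a toy Lean 4 sketch (not AlphaProof's actual output, and not the IMO problem) of the same shape of move: inducting on a throwaway numeral the goal never mentions, so the base case is just the original goal and the inductive step merely hands back its own hypothesis.

```lean
-- Toy illustration only (not AlphaProof's output): "induction" on a
-- numeral the goal never mentions. The base case is the whole proof,
-- and the inductive step just restates its hypothesis.
example : 1 + 1 = 2 := by
  have k : Nat := 12       -- an irrelevant 12, named so we can induct on it
  induction k with
  | zero => rfl            -- the entire proof happens in the base case
  | succ _ ih => exact ih  -- the "step" only replays the hypothesis
```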

[–] BigMuffin69@awful.systems 11 points 5 months ago

Also, choice sneer in the comments:

AlphaProof is more "AlphaZero doing self-play against Lean" and less "Gemini reading human proofs"

[–] dgerard@awful.systems 9 points 5 months ago (4 children)
[–] o7___o7@awful.systems 11 points 5 months ago (1 children)

How do people do this without dying of anxiety?

BTW, that last comment on RetractionWatch was grim:

I recently saw a presentation by a job candidate (a PhD with many years experience) who used figures that were clearly AI-generated. They weren’t as funny as this example or the giant rat penis, but certainly fictional and unreal. In and of itself, troubling enough. Worse was that my colleagues involved in the interview didn’t care when I pointed it out.

[–] skillissuer@discuss.tchncs.de 10 points 5 months ago

it's easier if you are a shameless fraud without a shred of integrity

This is also probably why MBAs like it
