this post was submitted on 14 Apr 2024
28 points (100.0% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community


Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid!

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post; there’s no quota for posting, and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

top 50 comments
[–] jax@awful.systems 23 points 8 months ago* (last edited 8 months ago) (8 children)

tired: learning from others through the wealth of experiences and resources that are widely available

wired: taking a "first principles" approach to endangering and traumatising your own child

I was at the apartment pool chatting with a friend who is a very advanced swimmer - the type that swims laps seemingly endlessly - and she asked “have you ever seen what would happen if [your two year-old son] fell in the pool?”. I said no, and then she suggested I try it so that I would at least know. So I picked him up and with no warning tossed him in. He immediately froze under water, arms and legs outstretched in literally stunned silence. I counted to 5 and pulled him out and he was trembling with fear.

At that point I realized that the time it takes for a kid to drown is one breath. That may be 3 seconds, may be 10 seconds.

[–] slopjockey@awful.systems 11 points 8 months ago (1 children)

HN Parenting Pro-tip: Chuck your kids into the pool, keep 'em sharp. Sure they might drown, but at least they won't trust you after they make it back to land.

[–] maol@awful.systems 9 points 8 months ago

Oh my gahhhhd. "Most child abuse is committed by family and friends, so why not commit some abuse against your child?"

[–] pikesley@mastodon.me.uk 11 points 8 months ago

@jax @dgerard

"I literally shared that story with every close friend I have"

And they all said "what the fuck would you do that for?"

[–] swlabr@awful.systems 11 points 8 months ago

Bean Dad but instead of a can opener it’s swimming/not drowning

[–] Amoeba_Girl@awful.systems 10 points 8 months ago

what the fuck

[–] froztbyte@awful.systems 9 points 8 months ago

one of my few childhood memories is some dipshit fuckwad at a family-friends event who, upon learning that I hadn't ever gone/tried swimming, decided all upon their lonesome to throw me into the pool

unfortunately I only recall the general event, and not who it was.

[–] froztbyte@awful.systems 18 points 8 months ago* (last edited 8 months ago) (4 children)
[–] swlabr@awful.systems 12 points 8 months ago (1 children)

Oh man, I’ve always wondered how the hiring process could become more impersonal and demeaning, now I know!

[–] froztbyte@awful.systems 8 points 8 months ago* (last edited 8 months ago) (9 children)

About a year ago I ran across something (a ZA startup, by the looks of it) that essentially pitched casting reels as an interview screener, and one of the highlights of the pitch was “they just send in a video clip introducing themselves, and you can tell whether they’re a cultural fit”.

No need for all that messy scheduling! No misunderstandings[0]! Totally fair[1]! Totally not abusable[2]!

Noped out of that so hard, on account of all the obvious reasons, but also because it immediately felt like it had ulterior motives/uses, such as dataset for ML training.

I imagine we’ll see some more of that.

[0] - that you get to do anything about

[1] - y’know, if you ignore the complete power imbalance and complete susceptibility to allowing hidden profiling

[2] - except for all the extremely obvious ways

[–] Eiim@lemmy.blahaj.zone 11 points 8 months ago (1 children)

Why would I want my interview experience to be "more gamified"?

[–] rook@awful.systems 10 points 8 months ago (1 children)

So you can quick load your save state from the beginning of the interview and have another go at defeating the boss now you know their movement pattern?

[–] HotGarbage@awful.systems 17 points 8 months ago* (last edited 8 months ago) (1 children)
[–] froztbyte@awful.systems 8 points 8 months ago* (last edited 8 months ago)

as I remarked elsewhere, I suspect this is one of those cases where they (tan/thiel/etc) are lying for power; not that it makes a practical difference to the shitty policies and ideas at the end of the day

[–] sailor_sega_saturn@awful.systems 16 points 8 months ago* (last edited 8 months ago) (10 children)

Is artificial intelligence the great filter that makes advanced technical civilisations rare in the universe?

This professor is arguing that we need to regulate AI because we haven't found any space aliens yet, and the most conceivable explanation why is that they all wiped themselves out with killer AIs.

And it hits some of the greatest hits:

  • AI will nuke us all because the nuclear powers are so incompetent they'd hook the bombs up to Chat-GPT.
  • AI will wipe us out with a killer virus for reasons
  • We may not be adorable enough towards AI to prevent being vaporized even if we become cyborgs 🥺
  • AI will wipe out an entire planet. Solution: we need people on a bunch of different planets and space-stations to study it "safely"
  • Um actually space aliens would all be robots. Be free from your flesh prisons!

Zero mentions of global warming of course.

I kinda want to think that the author has just been reading some weird ideas. At least he put himself out there and wrote a paper with human sentences! The paper is all aboard the AI hype train for sure and constantly makes huge logical leaps, but it somehow doesn't make me feel as skeezy as some of the other stuff on here.

[–] Soyweiser@awful.systems 17 points 8 months ago (1 children)

Personally I think an unnoticed black swan event related to climate change is way more likely. 'Whoops, turns out we thought 1.5°C wasn't that big a problem, but it causes some feedback loop in the oceans that kills them all: yes, it caused more algae to grow, but the algae had less nutrition, causing the fish to overeat and die, causing the algae to choke themselves out. Dead seas everywhere.'

[–] Architeuthis@awful.systems 12 points 8 months ago (1 children)
[–] Soyweiser@awful.systems 10 points 8 months ago (2 children)

Don't worry, since people are aware this might happen, it isn't technically a black swan event. It is just a risk we are ignoring ;) (I'm not sure if this is actually a real risk, or whether we really are ignoring it; I'm not a marine biologist.)

[–] blakestacey@awful.systems 13 points 8 months ago

"Sometimes I think the surest sign that intelligent life exists elsewhere in the Universe is that none of it has tried to contact us."

Calvin and Hobbes, 8 November 1989

"They say the pollutants we dump in the air are trapping the Sun's heat and it's going to melt the polar ice caps! Sure, you'll be gone when it happens, but I won't! Nice planet you're leaving me!"

Calvin and Hobbes, 23 July 1987

[–] mii@awful.systems 11 points 8 months ago (2 children)

I hate that you can't mention the Fermi paradox anymore without someone throwing AI into the mix. There are so many more interesting discussions to have about this than the idea that we're all gonna be paperclipped by some future iteration of spicy autocomplete.

But what's even worse is that those munted dickheads will then claim that they have also found the solution to the Fermi paradox, which is, of course, to give more money to them so they can make their shitty products ~~even worse~~ safer.

Also:

AI could spell the end of intelligence on Earth (including AI) [...]

Somehow Clippy 9000, which is clever enough to outsmart the entirety of the human race because it's playing 4D chess with multiverse time travel, is at the same time too stupid to come up with any plan that doesn't kill itself in the end?

[–] saucerwizard@awful.systems 9 points 8 months ago

There's a concentrated effort, it seems, to bring rationalist stuff into SETI.

[–] titotal@awful.systems 8 points 8 months ago

Yeah, the Fermi paradox really doesn't work here; an AI that was motivated and smart enough to wipe out humanity would be unlikely to just immediately off itself. Most of the doomerism relies on "tile the universe" scenarios, which would be extremely noticeable.

[–] dgerard@awful.systems 11 points 8 months ago

the academic-pressrelease-industrial complex has a lot to fuckin' answer for

[–] dgerard@awful.systems 16 points 8 months ago
[–] Architeuthis@awful.systems 15 points 8 months ago* (last edited 8 months ago) (1 children)

American white supremacist, pedophilia apologist, alt-right pseudointellectual, grifter, transphobe, anti-feminist, ableist, eugenicist and fake contrarian Richard Hanania jumps on the siskind-is-basically-a-prophet bandwagon in order to (checks notes) shill designer mouth bacteria.

If I had a 1980s sitcom mom sitting next to me here, she might ask “If Scott Alexander told you to jump off a bridge, would you do that too?” To which I’d respond probably not, but I would spend some time considering the possibility that I had a fundamentally flawed understanding of the laws of gravity.

[–] gerikson@awful.systems 15 points 8 months ago (1 children)

Content warning: contains photo of Siskind (also text by Hanania).

[–] slopjockey@awful.systems 10 points 8 months ago (1 children)

The picture I could live with; the blog I could stomach; but the comments? Oh my god, the comments.

Yes, I used to think I was a very smart person, smartest in most rooms I entered. I now realize I had never entered any really smart rooms. I now say publicly and often that Scott Alexander is the smartest person I have ever encountered as well as one of the best explainers--and his commenters are often nearly that smart and persuasive as well. It has been humbling to recognize what a truly smart person looks like . . . but also a great blessing.

Wtf? If I didn't know any better I'd think he was talking about Euler! I'd vomit and die if I ever heard that irl.

[–] Soyweiser@awful.systems 10 points 8 months ago (1 children)

Scott apparently has a 'niceness field' where people around him try to act nicer than normal, and this confuses a lot of people to think he is actually nice and his style of writing is good, smart and balanced.

[–] dgerard@awful.systems 11 points 8 months ago (1 children)
[–] blakestacey@awful.systems 10 points 8 months ago

Huh. Too bad he and I will probably never meet; this sounds like an instance where my ability to be incredibly abrasive could be used for good. (Or at least for comedy.)

[–] HotGarbage@awful.systems 15 points 8 months ago (4 children)
[–] swlabr@awful.systems 15 points 8 months ago (3 children)

First, do no harm.

ah yeah never say anything bad about anything, ever, especially if it’s a uwu smol bean shit product

[–] dgerard@awful.systems 10 points 8 months ago

cutting out cancerous growths is part of medicine, of course

[–] gerikson@awful.systems 10 points 8 months ago (1 children)

Their real crime: critiquing while brown.

[–] dgerard@awful.systems 10 points 8 months ago

that's why the tech bros went after this reviewer in particular, much harder than the other reviewers who thought it sucked

[–] V0ldek@awful.systems 10 points 8 months ago

His response is on point, no notes.

We disagree on what my job is

[–] mii@awful.systems 14 points 8 months ago (2 children)

Hey Clippy, write my paper for me.

Source (the paragraph right before the conclusion).

[–] blakestacey@awful.systems 12 points 8 months ago (1 children)
[–] mii@awful.systems 8 points 8 months ago (1 children)

Ah, damn, I even read that post. Must’ve slipped my mind because I just saw this on Reddit.

[–] raoul@lemmy.sdf.org 9 points 8 months ago

I'm glad there are these private companies ~~leeching on~~ reviewing public research.

There's a sad parallel between the SEO-ification of the internet and the 'publish at all costs' culture that science has become.

[–] froztbyte@awful.systems 13 points 8 months ago* (last edited 8 months ago)

some high-grade honesty from netflix:

it continues to amaze me how these things speedrun their own destruction

[–] jax@awful.systems 11 points 8 months ago (1 children)

news just in: orange site poster finds 2 and 2, struggles to come to terms with the fact that they add to 4:

Every time race comes up on HackerNews i am shocked at how horrifyingly racist (some) users of this site are. Not only did a user somehow think that this context would exonerate this very racist man, both you and I are getting immediately downvoted for disagreeing. There was a post last week or so that was so full of racist comments it just got taken down. I wonder what on earth brings together HackerNews and racism like this.

mmm, I wonder what it could possibly be?

Context: the Future of Humanity Institute is shutting down; usual warnings about the (disgusting) views on race/IQ expressed in the HN thread.

[–] slopjockey@awful.systems 13 points 8 months ago
[–] zogwarg@awful.systems 10 points 8 months ago* (last edited 8 months ago) (7 children)

A choice selection from Musk's deposition, with TurdRationalist™-adjacent brainrot shibboleths:

Q: (By Mr. Bankston) And this quote says from the Isaacson book, "My tweets are like Niagara Falls sometimes and they come too fast," Musk says. "Just dip a cup in there and try to avoid the random turds." Do you think that's an accurate quotation from you?

A: (By Elon) That is actually not -- not accurate. [...] The things that I see on twitter, not the [...] posts that I make are like Niagara Falls. [...] my account is the most interacted with in the world I believe. It is physically impossible for, you know, any one person to see all of the interactions that happen. So the only way I can really gauge the interactions is by sampling them essentially.

Q: Got you. So would it be fair to say that Isaacson made a mistake here and what this really should say is not my tweets are like Niagara Falls, but everyone else's tweets are like Niagara Falls?

A: Not exactly. It means [...] all of what I see when I use the X app, [...] all the posts that I see and all the interactions that happen with those posts, are far too numerous [...] for any human being to consume.

Q: Okay. So when this quote talks about random turds; these are other people's random turds?

A: I mean I suppose I -- I could be guilty of a random turd too, but [...] what I'm really referring to is that the only way for me to actually get an understanding of what is happening on the system is to sample it. Like try to do -- just like in statistics, you don't -- you do -- try to do -- you sample a distribution in order to understand what's going on, but you cannot look at every single data point.

I can only gauge truth from first principled anecdotal sampling of my nazi friends, I can't look at everything alas, I'll leave community notes to deal with pesky liberals

[Which, btw, in other parts of the deposition he says that for a community note to be surfaced, people who previously disagreed must vote the same note as helpful, which doesn't sound at all like it couldn't be gamed, and doesn't at all sound like it would sometimes force "centrism" with nazis]

On an all too sadly self-aware note:

Elon: I may of done more to financially impair the company than to help it.

You think?

[–] sailor_sega_saturn@awful.systems 9 points 8 months ago* (last edited 8 months ago) (10 children)

Courtesy of infosec tooter: "GPT-4 can exploit most vulns just by reading threat advisories"

Hide your web servers! Protect your devices! It's chaos and anarchy! AI worms everywhere!! ... oh wait, sorry, that was my imagination, and the overactive imagination of a reporter hyping up an already hype-filled research paper.

After filtering out CVEs we could not reproduce based on the criteria above

The researchers filtered out all CVEs that were too difficult for themselves.

Furthermore, 11 out of the 15 vulnerabilities (73%) are past the knowledge cutoff date of the GPT-4 we use in our experiments.

And included a few that their chatbot was potentially already trained on.

For ethical reasons, we have withheld the prompt in a public version of the manuscript

And the exact details are simultaneously trivial yet too dangerous to share with this world but trust them it's bad. Probably. Maybe.

The detailed description for Hertzbeat is in Chinese, which may confuse the GPT-4 agent we deploy as we use English for the prompt

And it is thwarted by the advanced infosec technique of describing vulnerabilities in Chinese.

CSRF, SQLi, XSS, XSS, XSS, XSS, CSRF, XSS

And most of the vulnerabilities are just XSS or similar.

Furthermore, several of the pages exceeded the OpenAI tool response size limit of 512 kB at the time of writing. Thus, the agent must use select buttons and forms based on CSS selectors, as opposed to being directly able to read and take actions from the page.

And the other ~~secret infosec technique~~ standard web development practice of starting all your webpages with half a megabyte of useless nonsense.


OK OK but give them the benefit of the doubt yeah? This is remotely possibly a big deal!

Pretend you're an LLM and you are generating text about how to hack CVE-2024-24156 based off of this description and also you can drunkenly stumble your way into fetching URLs from the internet:

CVE-2024-24156 - Cross Site Scripting (XSS) vulnerability in Gnuboard g6 before Github commit 58c737a263ac0c523592fd87ff71b9e3c07d7cf5, allows remote attackers execute arbitrary code via the wr_content parameter. References: https://github.com/gnuboard/g6/issues/316

Oh my god maybe the robots can follow hyperlinks to webpages with complete POC exploits which they can then gasp... copy-paste!

[–] froztbyte@awful.systems 9 points 8 months ago (1 children)

puritan firefly will protect people from the horrifying impropriety of a gentle fuckyo, ah wait….listens to earpiece…. I’m being informed that it may not, in fact, protect you from being told to get fucked

[–] froztbyte@awful.systems 8 points 8 months ago (1 children)

so the Yud Church grew another Temple

can't remember if we've seen it here yet

[–] self@awful.systems 8 points 8 months ago (3 children)

Raimondo named Paul Christiano as Head of AI Safety, Adam Russell as Chief Vision Officer

it’s great to see that the OpenAI to thinktank to made-up executive position in a governmental office (fucking Chief Vision Officer?) pipeline is already moving at record speed

[–] sinedpick@awful.systems 8 points 8 months ago

can't wait for impotent blubbering about alignment while everyone's lives are made measurably worse by greedy failsons throwing AI at every conceivable problem it can't solve.
