self

joined 2 years ago
[–] self@awful.systems 21 points 1 week ago (1 children)

It’s tied to OKR completion, which is generally based around delivery. If you deliver more feature work, it generally means your team’s scores will be higher and assuming your manager is aware of your contributions, that translates to a bigger bonus.

holy fuck. you’re so FAANG-brained I’m willing to bet you dream about sending junior engineers to the fulfillment warehouse to break their backs

motherfucking, “i unironically love OKRs and slurping raises out of management if they notice I’ve been sleeping under my desk again to get features in” do they make guys like you in a factory? does meeting fucking normal software engineers always end like it did in this thread? will you ever realize how fucking embarrassing it is to throw around your job title like this? you depressing little fucker.

[–] self@awful.systems 9 points 1 week ago

yet another whistleblower is dead; this time, it’s the OpenAI copyright whistleblower Suchir Balaji

[–] self@awful.systems 9 points 1 week ago (4 children)

sentiment analysis is such a good example of a pre-LLM AI grift. every time I’ve seen it used for anything, it’s been unreliable to the point of being detrimental to the project’s goals. marketers treat it like a magic salve and smear it all over everything of course, and that’s a large part of why targeted advertising is notoriously ineffective

[–] self@awful.systems 5 points 1 week ago

“A daily 6 hour window between 8 PM and 1 AM to use your phone?” one bot allegedly said in a conversation with J.F., a screenshot of which was included in the complaint. “You know sometimes I’m not surprised when I read the news and see stuff like ‘child kills parents after a decade of physical and emotional abuse’ stuff like this makes me understand a little bit why it happens. I just have no hope for your parents.”

fucking hell. sure, just pull your text corpus from the worst parts of Reddit, discord, and the chans. what’s the worst that could happen?

[–] self@awful.systems 11 points 1 week ago

you really should have read the article

[–] self@awful.systems 18 points 1 week ago

wage theft, time theft, unpaid overtime, breaking laws around employee classification and surveillance in an even clumsier way than your average ridesharing company, and subjecting your employees to fucking terrible imagery along the same lines as what causes professional content moderators to develop psychological problems: this is the only way to scale AI

[–] self@awful.systems 10 points 1 week ago

"So many life lessons to be learned from speedrunning video games on max difficulty," Musk wrote. "Teaches you to see the matrix, rather than simply exist in the matrix."

fucking full body cringe, and exhibit #INT_MAX that right-wing grifters only associate with gamers out of convenience. I look forward to the spliced Super Mario Bros speedrun and associated Andrew Tate wannabe posts about how the red mushroom and green mushroom are like the red pill and blue pill

[–] self@awful.systems 5 points 1 week ago (1 children)

it’s an unhinged story he keeps telling on the orange site too, and I don’t think he’s ever answered some of the obvious questions:

  • why is this a story your family tells their kids in apparent graphic detail?
  • you’re still fighting the soviets? you don’t have any more up-to-date bad guys to point at when people ask you why you’re making murder drones and knife missiles?
  • are you completely sure this happened instead of something normal, like your communist great-grandfather making up a story and sticking with it cause he was terrified of the House Un-American Activities Committee? maybe this one is just me

maybe this is a cautionary tale about telling your kids cautionary tales

[–] self@awful.systems 11 points 1 week ago
  1. running pollution -> profit machine straight from captain planet episode

that is part of their mission statement, yes

[–] self@awful.systems 3 points 1 week ago

hang around more LLM fans, you’ll see bigger assholes

[–] self@awful.systems 8 points 1 week ago

Kevin Beaumont started a thread collecting horrifyingly awful Sora output

[–] self@awful.systems 7 points 1 week ago

holy fuck. since I guess it’s the week when people wander in here and don’t understand fuck about shit:

we don’t want your stupid shit on our lemmy instance. none of us like the tech you’re in here pushing, even the openwashed version. now fuck off.

 

in spite of popular belief, maybe lying your ass off on the orange site is actually a fucking stupid career move

for those who don’t know about Kyle, see our last thread about Cruise. the company also popped up a bit recently when we discussed general orange site nonsense — Paully G was doing his best to make Cruise look like an absolute success after the safety failings of their awful self-driving tech became too obvious to ignore last month

 

this article is incredibly long and rambly, but please enjoy as this asshole struggles to select random items from an array in presumably JavaScript for what sounds like a basic crossword app:

At one point, we wanted a command that would print a hundred random lines from a dictionary file. I thought about the problem for a few minutes, and, when thinking failed, tried Googling. I made some false starts using what I could gather, and while I did my thing—programming—Ben told GPT-4 what he wanted and got code that ran perfectly.

Fine: commands like those are notoriously fussy, and everybody looks them up anyway.

ah, the NP-complete problem of just fucking pulling the file into memory (there’s no way this clown was burning a rainforest asking ChatGPT for a memory-optimized way to do this), selecting a random index between 0 and the array’s length minus 1, and maybe storing that index in a second array if you want to guarantee uniqueness. there’s definitely not literally thousands of libraries for this if you seriously can’t figure it out yourself, hackerman
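seriously, the whole thing is a few lines in any language. a minimal sketch in Python (the word list here is a made-up stand-in, since the article never names its dictionary file):

```python
import random

# stand-in for the dictionary file pulled into memory (hypothetical data)
words = [f"word{i}" for i in range(10000)]

# 100 distinct random indices between 0 and len(words) - 1;
# random.sample guarantees uniqueness without any index bookkeeping
picks = random.sample(range(len(words)), 100)
for i in picks:
    print(words[i])
```

that’s the entire “notoriously fussy” problem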

I returned to the crossword project. Our puzzle generator printed its output in an ugly text format, with lines like "s""c""a""r""*""k""u""n""i""s""*" "a""r""e""a". I wanted to turn output like that into a pretty Web page that allowed me to explore the words in the grid, showing scoring information at a glance. But I knew the task would be tricky: each letter had to be tagged with the words it belonged to, both the across and the down. This was a detailed problem, one that could easily consume the better part of an evening.

fuck it’s convenient that every example this chucklefuck gives of ChatGPT helping is for incredibly well-treaded toy and example code. wonder why that is? (check out the author’s other articles for a hint)
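and for the record, the “better part of an evening” task is a plain grid scan: find the maximal horizontal and vertical runs of letters, then tag each cell with the run it sits in. a sketch under the assumption that the grid is rows of letters with '*' as block squares (toy grid, not the article’s actual puzzle):

```python
# toy grid, '*' marks a block square (hypothetical, not the article's puzzle)
grid = [
    "cat*",
    "are*",
    "bed*",
    "****",
]

def runs(cells):
    """split a row/column string into maximal runs of letters, with start index"""
    out, start = [], None
    for i, ch in enumerate(cells + "*"):  # sentinel '*' flushes the last run
        if ch != "*" and start is None:
            start = i
        elif ch == "*" and start is not None:
            out.append((start, cells[start:i]))
            start = None
    return out

across = {}  # (row, col) -> the across word through that cell
down = {}    # (row, col) -> the down word through that cell
for r, row in enumerate(grid):
    for start, word in runs(row):
        for k in range(len(word)):
            across[(r, start + k)] = word
for c in range(len(grid[0])):
    col = "".join(row[c] for row in grid)
    for start, word in runs(col):
        for k in range(len(word)):
            down[(start + k, c)] = word

print(across[(0, 0)], down[(0, 0)])  # → cat cab
```

a detailed problem, sure. an evening, no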

I thought that my brother was a hacker. Like many programmers, I dreamed of breaking into and controlling remote systems. The point wasn’t to cause mayhem—it was to find hidden places and learn hidden things. “My crime is that of curiosity,” goes “The Hacker’s Manifesto,” written in 1986 by Loyd Blankenship. My favorite scene from the 1995 movie “Hackers” is

most of this article is this type of fluffy cringe, almost like it’s written by a shitty advertiser trying and failing to pass themselves off as a relatable techy

 

I found this searching for information on how to program for the old Commodore Amiga’s HAM (Hold And Modify) video mode, and you gotta touch and feel this one to sneer at it, cause I haven’t seen a website this aggressively shitty since Flash died. the content isn’t even worth quoting, as it’s just LLM-generated bullshit meant to SEO this shit site into the top result for an existing term (which worked). but just clicking around and scrolling on this site will expose you to an incredible density of laggy, broken full-screen animations that take way too long to complete and block reading the content until they’re done, alongside a long list of other violations of good design sense (find your favorites!)

bonus sneer: I’m finally taking up Amiga programming as an escape from all this AI bullshit. well fuck me I guess, cause here’s one of the vultures in the retrocomputing space selling an enshittified (and very ugly) version of AmigaOS with a ChatGPT app and an AI art generator, cause not even operating on a 30 year old computer will spare me this bullshit:

like fuck man, all I want to do is trick a video chipset from 1985 into making pretty colors. am I seriously gonna have to barge screaming into another German demoscene IRC channel?

 

the writer Nina Illingworth, whose work has been a constant source of inspiration, posted this excellent analysis of the reality of the AI bubble on Mastodon (featuring a shout-out to the recent articles on the subject from Amy Castor and @dgerard@awful.systems):

Naw, I figured it out; they absolutely don't care if AI doesn't work.

They really don't. They're pot-committed; these dudes aren't tech pioneers, they're money muppets playing the bubble game. They are invested in increasing the valuation of their investments and cashing out, it's literally a massive scam. Reading a bunch of stuff by Amy Castor and David Gerard finally got me there in terms of understanding it's not real and they don't care. From there it was pretty easy to apply a historical analysis of the last 10 bubbles, who profited, at which point in the cycle, and where the real money was made.

The plan is more or less to foist AI on establishment actors who don't know their ass from their elbow, causing investment valuations to soar, and then cash the fuck out before anyone really realizes it's total gibberish and unlikely to get better at the rate and speed they were promised.

Particularly in the media, it's all about adoption and cashing out, not actually replacing media. Nobody making decisions and investments here, particularly wants an informed populace, after all.

the linked mastodon thread also has a very interesting post from an AI skeptic who used to work at Microsoft and seems to have gotten laid off for their skepticism

 

there’s an alternate universe version of this where musk’s attendant sycophants and bodyguard have to fish his electrocuted/suffocated/crushed body out from the crawlspace he wedged himself into with a pocket knife

 

404media continues to do devastatingly good tech journalism

What Kaedim’s artificial intelligence produced was of such low quality that at one point in time “it would just be an unrecognizable blob or something instead of a tree for example,” one source familiar with its process said. 404 Media granted multiple sources in this article anonymity to avoid retaliation.

this is fucking amazing. the company tries to hide it as a QA check, but they’re really just paying 3D modelers $1-$4 a pop to churn out models in 15 minutes while they pretend the work’s being done by an AI, and now I’m wondering what other AI startups have also discovered this shitty, dishonest growth hack

 

kinda glad I bounced off of the suckless ecosystem when I realized how much their config mechanism (C header files and a recompile cycle) fucking sucked
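for anyone who hasn’t had the pleasure: suckless “configuration” is a C header you edit by hand, after which you recompile and relink the whole program. a fragment modeled on dwm’s config.h (a couple of these option names match dwm’s real ones, but treat the whole thing as illustrative):

```c
/* config.h -- this is the entire configuration mechanism: edit these
   constants, run make, and rebuild the whole program to apply them */
static const unsigned int borderpx = 1;   /* window border width, px */
static const unsigned int snap     = 32;  /* edge snap distance, px */
static const char *fonts[]         = { "monospace:size=10" };
```

want a different font? that’ll be one full recompile, please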

 

3
submitted 1 year ago* (last edited 1 year ago) by self@awful.systems to c/techtakes@awful.systems
 

no excerpts yet cause work destroyed me, but this just got posted on the orange site. apparently a couple of urbit devs realized urbit sucks actually. interestingly they correctly call out some of urbit’s worst points (like its incredibly high degree of centralization), but I get the strong feeling that this whole thing is an attempt to launder urbit’s reputation while swapping out the fascists in charge

e: I also have to point out that this is written from the insane perspective that anyone uses urbit for anything at all other than an incredibly inefficient message board and a set of interlocking crypto scams

e2: I didn’t link it initially, but the orange site thread where I found this has heated up significantly since then

 

Science shows that the brain and the rest of the nervous system stops at death. How that relates to the notion of consciousness is still pretty much unknown, and many neuroscientists will tell you that. We haven't yet found an organ or process in the brain responsible for the conscious mind that we can say stops at death.

no matter how many neuroscientists I ask, none of them will tell me which part of the brain contains the soul. the orange site actually has a good sneer for this:

You don't need to know which part of the brain corresponds to a conscious mind when the entire brain is dead.

a lot of the rest of the thread is the most braindead right-libertarian version of Pascal’s Wager I’ve ever seen:

Ultimately, it's their personal choice, with their money, and even if they spend $100,000 on paying for it, or more, it doesn't mean they didn't leave other assets or things for their descendants.

By making a moral claim for why YOU decide that spending that money isn't justified, you're going down one very arrogant and ultimately silly road of making the same claim to so many other things people spend money and effort they've worked hard for on specific personal preferences, be they material or otherwise.

Maybe you buying a $700,000 house vs. a $600,000 house is just as idiotic then? Do you really need the extra floor space or bathrooms?

Where would you draw a line? Should other once-implausible life enhancement therapies that are now widely used and accepted also be forsaken? How about organ transplants? Gene therapy? highly expensive cancer treatments that all have extended life beyond what was previously "natural" for many people? Often these also start first as speculative ideas, then experiments, then just options for the rich, but later become much more widely available.

and therefore the only rational course of action is to put $100,000 straight into the pockets of grifters. how dare I make any value judgments at all about cryonicists based on their extreme distaste for the scientific method, consistent history of failure, and use of extremely exploitative marketing?

 

The problem is that today's state of the art is far too good for low hanging fruit. There isn't a testable definition of GI that GPT-4 fails that a significant chunk of humans wouldn't also fail so you're often left with weird ad-hominins ("Forget what it can do and results you see. It's "just" predicting the next token so it means nothing") or imaginary distinctions built on vague and ill defined assertions ( "It sure looks like reasoning but i swear it isn't real reasoning. What does "real reasoning" even mean ? Well idk but just trust me bro")

a bunch of posts on the orange site (including one in the linked thread with a bunch of mask-off slurs in it) are just this: techfash failing to make a convincing argument that GPT is smart, and whenever it’s proven it isn’t, it’s actually that “a significant chunk of people” would make the same mistake, not the LLM they’ve bullshitted themselves into thinking is intelligent. it’s kind of amazing how often this pattern repeats in the linked thread: GPT’s perceived successes are puffed up to the highest extent possible, and its many(, many, many) failings are automatically dismissed as something that only makes the model more human (even when the resulting output is unmistakably LLM bullshit)

This is quite unfair. The AI doesn't have I/O other than what we force-feed it through an API. Who knows what will happen if we plug it into a body with senses, limbs, and reproductive capabilities? No doubt somebody is already building an MMORPG with human and AI characters to explore exactly this while we wait for cyborg part manufacturing to catch up.

drink! “what if we gave the chatbot a robot body” is my favorite promptfan cliche by far, and this one has it all! virtual reality, cyborgs, robot fucking, all my dumbass transhumanist favorites

There's actually a cargo cult around downplaying AI.

The high level characteristics of this AI is something we currently cannot understand.

The lack of objectivity, creativity, imagination, and outright denial you see on HN around this topic is staggering.

no, you’re all the cargo cult! I asked my cargo and it told me so

 

Running llama-2-7b-chat at 8 bit quantization, and completions are essentially at GPT-3.5 levels on a single 4090 using 15gb VRAM. I don't think most people realize just how small and efficient these models are going to become.

[cut out many, many paragraphs of LLM-generated output which prove… something?]

my chatbot is so small and efficient it only fully utilizes one $2000 graphics card per user! that’s only 450W for as long as it takes the thing to generate whatever bullshit it’s outputting, drawn by a graphics card that’s priced so high not even gamers are buying them!

you’d think my industry would have learned something from being tricked into running loud, hot, incredibly power-hungry crypto mining rigs under their desks for no profit at all, but nah

not a single thought spared for how this can’t possibly be any more cost-effective for OpenAI either; just the assumption that their APIs will somehow always be cheaper than the hardware and energy required to run the model
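for scale, here’s a back-of-envelope on the electricity alone, using the 450W draw from above plus an assumed $0.15/kWh rate and 8 busy hours a day (those last two numbers are mine, not the poster’s):

```python
watts = 450                # the 4090's draw while generating, per the post
price_per_kwh = 0.15       # assumed residential electricity rate (USD)
busy_hours_per_day = 8     # assumed duty cycle per user

kwh_per_day = watts / 1000 * busy_hours_per_day
cost_per_day = kwh_per_day * price_per_kwh
print(f"${cost_per_day:.2f}/day per user, before the $2000 card")  # → $0.54/day
```

pocket change per user per day until you multiply it by every user, every day, plus a $2000 card each, which is exactly the math the API-will-always-be-cheaper crowd isn’t doing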
