self

joined 2 years ago
[–] self@awful.systems 8 points 4 months ago

that’s awesome! designing entertaining systems has always been a challenge for me every time I’ve attempted a game project. it’s always a good feeling when things start working though!

[–] self@awful.systems 9 points 4 months ago

it really didn’t take long for OpenAI to enter its binance era. just make cryptic non-statements and boost conspiracy theories and watch your stock price go up!

[–] self@awful.systems 10 points 4 months ago (2 children)

it’s absolutely fucked that the grammar and tone of Google’s response is so casual, and the proposed non-solution so worthless, that the whole post looks like parody — I was convinced @sailor_sega_saturn@awful.systems must have been paraphrasing until I saw the screenshot of the original post and found out it was just a straight copy and paste

sorry you didn’t like that we implied you’re a child murderer. no takesie-backsies though! fill out this form we won’t read if you’re really sure you didn’t like our fun experiment!

I wonder if giving responses so fucking ridiculous they look like a joke is a tactic on Google’s part to make complaints about the shit they’re doing look ridiculous and overdramatic by association

[–] self@awful.systems 5 points 4 months ago (1 children)

I mean, in the worst case, we find out if these database backups are worth a damn? but realistically, we see so much spam from activitypub already that it should be hard to make things fall over*

  • unless you know which parts to push on, then it’s instant

[–] self@awful.systems 11 points 4 months ago (1 children)

type systems are censorship. proof assistants? how dare you imply I would need to prove anything

…fuck, I’m flashing back to the one time a Verilog developer told me formal verification wasn’t real because mathematicians don’t understand engineering

[–] self@awful.systems 4 points 4 months ago (3 children)

that’s a good question! I’ll probably have to brainstorm this with @dgerard@awful.systems later today. in the meantime, is there any precedent for how to do it on Mastodon? we might be able to adopt whatever they do — or maybe at the very least, if there’s a good way to do it there, we could link to a Mastodon thread for the bracket and keep discussion on here.

[–] self@awful.systems 7 points 4 months ago

I’d really like to know too, especially given how many times we’ve already seen LLMs misused in scientific settings. it’s starting to feel like the LLM people don’t have that notion — but that’s crazy, right?

[–] self@awful.systems 13 points 4 months ago (4 children)

like fuck, all you or I want out of these wandering AI jackasses is something vaguely resembling a technical problem statement or the faintest outline of an algorithm. normal engineering shit.

but nah, every time they just bullshit and say shit that doesn’t mean a damn thing as if we can’t tell, and when they get called out, every time it’s the “well you ¡haters! just don’t understand LLMs” line, as if we weren’t expecting a technical answer that just never came (cause all of them are only just cosplaying as technically skilled people and it fucking shows)

[–] self@awful.systems 9 points 4 months ago (1 children)

uh huh

it’s fucking amazing, all these words and you’ve managed to post exactly zero facts. time for you to fuck off

[–] self@awful.systems 10 points 4 months ago (6 children)

We’d be better off not trying to censor it

this claim keeps getting brought up, and every time it doesn’t seem to mean a damn thing, particularly since no, censoring the output of an LLM doesn’t do anything to its ability to predict text. censoring its training set would, but seeing as the topic of this thread is a fact an LLM fabricated by being just a dumb text predictor, there’s no real way to censor the training set to prevent this. LLMs are just shitty.
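to make that distinction concrete, here’s a minimal sketch (all names hypothetical, toy stand-in for a real model): output filtering is just a wrapper around generation, so the underlying predictor runs exactly as before; only changing what the model was trained on would change what it predicts.

```python
# toy sketch: output censorship as a post-hoc wrapper.
# predict_next stands in for an LLM's next-token prediction; its
# "weights" (here, a lookup table) are never touched by the filter.

def predict_next(context: str) -> str:
    table = {"the sky is": "blue", "2 + 2 =": "4"}
    return table.get(context, "...")

BLOCKLIST = {"blue"}

def censored_generate(context: str) -> str:
    token = predict_next(context)  # the model runs exactly as before
    return "[redacted]" if token in BLOCKLIST else token

# the predictor itself is identical whether or not the filter is applied
assert predict_next("the sky is") == "blue"
assert censored_generate("the sky is") == "[redacted]"
assert censored_generate("2 + 2 =") == "4"
```

the point of the sketch: the filter can only act on what the predictor already emitted, which is why it has no effect on the model’s ability (or inability) to predict text.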

I summarize all of that by saying AI is a useful tool

trying to find a use case for this horseshit has broken your brain into thinking these worthless tools would have value if only they weren’t “being censored” or whatever cope you gleaned from the twitter e/accs

[–] self@awful.systems 11 points 4 months ago

It would be a simple matter to have it summarize the output it’s about to give you and dump the output if it paints the subject in a negative light.

“it can’t be that stupid, you must be prompting it wrong”

[–] self@awful.systems 9 points 4 months ago (3 children)

Also, I’m shockingly infuriated that the tech workers that would end up being the ones replaced the soonest are so busy licking boots rather than throwing their shoes into the machinery.

so much of our industry is dedicated to ensuring that tech workers, most of whom consider themselves experts on complex systems, never analyze or try to influence the social systems surrounding and influencing their labor. these are the same loud voices that insist tech isn’t political, while turning important parts of our public and open source tech infrastructure into a Nazi bar.
