self

joined 2 years ago
[–] self@awful.systems 8 points 3 weeks ago

just the worst fucking Chrome fork

[–] self@awful.systems 16 points 3 weeks ago

1.2 thousand upvotes for the LLM equivalent of adding a little astrology to your holistic medicine. reddit ain’t ok

[–] self@awful.systems 21 points 4 weeks ago (2 children)

Chrome and Cleopatra issued a statement that that guy was fired because he sucked anyway. Chrome assured fans it would not be an “AI record.”

The plan was apparently for Hout to record a guide vocal that would then be reskinned with the “fakeass robot A.I. Stiv Bators voice.”

there’s something extra disrespectful about a punk band lying about firing the guy they were planning on exploiting to train a horrid tool their record label insisted would make them more money

[–] self@awful.systems 3 points 4 weeks ago

why in the fuck are you back

why in the fuck did you think that bragging about not reading the article was a good move?

oh well, the mysteries of jimmy90 we’ll never find out

but before you go:

you know that joke people make about reddit and lemmy where people don’t read the articles

this isn’t a joke you’re in on, you’re being made fun of. the only joke is how much you don’t get that.

[–] self@awful.systems 17 points 4 weeks ago (2 children)

to be honest, they give me a lot of mtgox vibes:

  • extremely stupid name
  • technically predates the worst excesses of the AI bubble
  • very eager to enable the worst excesses of the AI bubble
[–] self@awful.systems 8 points 4 weeks ago

it is! and “we have no plans to break compatibility” needs to be called out as bullshit every time it’s brought up, because it is a tactic. in the best case it’s a verbal game — they have no plans to maintain compatibility either, so they can pretend these unnecessary breakages are accidental.

I can’t say I see the outcome in the GitHub issue as a positive thing. both redis and the project maintainers have done a sudden 180 in terms of their attitude, and the original proposal is now being denied as a misunderstanding (which it absolutely wasn’t) now that it proved to be unpopular. my guess (from previous experience and the dire warnings in that issue) is that redis is going to attempt the following:

  • take over the project’s governance quietly via proxies
  • once that’s done, engage in a policy where changes that break compatibility with valkey and other redis-likes are approved and PRs to fix compatibility are de-prioritized or rejected outright

if this is the case, it’s a much worse situation than them forking the project — this gets them the outcome they wanted, but curtails the community’s ability to respond to what will happen until it’s far too late.

[–] self@awful.systems 7 points 4 weeks ago

oh of course it’s fucking Axon

[–] self@awful.systems 17 points 4 weeks ago (2 children)

after going closed-source, redis is now doing a matt and trying to use trademark to take control over community-run projects. stay tuned to the end of the linked github thread where somebody spots their endgame

this is becoming a real pattern, and it might deserve a longer analysis in the form of a blog post

[–] self@awful.systems 21 points 4 weeks ago (2 children)

the richest boy in the world sued to stop The Onion from turning infowars into a parody of itself on the grounds that he thinks infowars’ twitter accounts shouldn’t be transferred as part of the bankruptcy even though that’s something that happens constantly and also wouldn’t impact the rest of the bankruptcy proceedings even if it were grounded in anything resembling fact

Musk has also tweeted occasionally that he believes The Onion is not funny.

it’s getting really hard to adequately describe how funny musk isn’t. it’s not just try-hard shit like the weird sink thing, the soul-sucking cameos, or the fact that he’s literally throwing his money into stopping a comedy site from existing — it’s everything taken as a whole. I’d call him anti-comedy, but he’s so much less interesting than that implies

[–] self@awful.systems 6 points 1 month ago (2 children)

a system where you can get priority at traffic lights, so they turn green faster

the US has this too (you can watch the stoplights suddenly reprioritize as an ambulance or cop car with their lightbars and sirens running approaches) and I’m honestly not sure why I haven’t ever seen it abused by some shithead with a HackRF or similar. maybe the penalties make it safer to just willingly run a red light?

[–] self@awful.systems 13 points 1 month ago (1 children)

other than interop, the big problem I have with this is security. car modding for performance is already a big thing, and a car mod that makes other cars slow down, stop, get out of your way, or otherwise malfunction would be incredibly popular with assholes of all varieties, and car modding has many. the current state of automotive is that security is a fucking shitshow, but I can’t figure out any kind of security model for this that isn’t vulnerable to a wide variety of obvious attacks. even a perfect inter-vendor attestation chain (good fucking luck) is vulnerable to hooking an ECU (or whatever the ruggedized monitoring microcontroller unit for a magic self-driving EV is) and radio up to a variety of fake sensors and crafting inputs such that the thing starts transmitting “wait no stop here” signals to all the surrounding cars

but then again, all of this is probably intentional because it creates a privileged class of people who can afford to fuck with self-driving car networking and not worry about any associated fines, and an unprivileged class who just have to put up with everything being so much worse. in a world where you can roll smoke into a Subway with relatively few consequences (not to mention all the other horseshit Truck Guys get away with), it’s not a hard outcome to imagine.

[–] self@awful.systems 9 points 1 month ago

imagine having an opinion

 

see also the github thread linked in the mastodon post, where the couple of gormless AI hypemen responsible for MDN’s AI features pick a fight with like 30 web developers

from that thread I’ve also found out that most MDN content is written by a collective that exists outside of Mozilla (probably explaining why it took them this long to fuck it up), so my hopes that somebody forks MDN are much higher

 

there’s a fun drinking game you can play where you take a shot whenever the spec devolves into flowery nonsense

§1. Purpose and Scope

The purpose of DIDComm Messaging is to provide a secure, private communication methodology built atop the decentralized design of DIDs.

It is the second half of this sentence, not the first, that makes DIDComm interesting. “Methodology” implies more than just a mechanism for individual messages, or even for a sequence of them. DIDComm Messaging defines how messages compose into the larger primitive of application-level protocols and workflows, while seamlessly retaining trust. “Built atop … DIDs” emphasizes DIDComm’s connection to the larger decentralized identity movement, with its many attendant virtues.

you shouldn’t have pregamed

 

today Mozilla published a blog post about the AI Help and AI Explain features it deployed to its famously accurate MDN web documentation reference a few days ago. here’s how it’s going according to that post:

We’re only a handful of days into the journey, but the data so far seems to indicate a sense of skepticism towards AI and LLMs in general, while those who have tried the features to find answers tend to be happy with the results.

got that? cool. now let’s check out the developer response on github soon after the AI features were deployed:

it seems like this feature was conceived, developed, and deployed without even considering that an LLM might generate convincing gibberish, even though that's precisely what they're designed to do.

oh dear

That is demonstrably wrong. There is no demo of that code showing it in action. A developer who uses this code and expects the outcome the AI said to expect would be disappointed (at best).

That was from the very first page I hit that had an accessibility note. Which means I am wary of what genuine user-harming advice this tool will offer on more complex concepts than simple stricken text.

So the "solution" is adding a disclaimer and a survey instead of removing the false information? 🙃 🙃 🙃

This response is clearly wrong in its statement that there is no closing tag, but also incorrect in its statement that all HTML must have a closing tag; while this is correct for XHTML, HTML5 allows for void elements that do not require a closing tag.

that doesn’t sound very good! but at least someone vetted the LLM’s answers, right?

MDN core reviewer/maintainer here.

Until @stevefaulkner pinged me about this (thanks, Steve), I myself wasn’t aware that this “AI Explain” thing was added. Nor, as far as I know, were any of the other core reviewers/maintainers aware it’d been added. Nor, as far as I know, did anybody get an OK for this from the MDN Steering Committee (the group of people responsible for governance of MDN) — nor even just inform the Steering Committee about it at all.

The change seems to have landed in the sources two days ago, in e342081 — without any associated issue, instead only a PR at #9188 that includes absolutely no discussion or background info of any kind.

At this point, it looks to me to be something that Mozilla decided to do on their own without giving any heads-up of any kind to any other MDN stakeholders. (I could be wrong; I've been away a bit — a lot of my time over the last month has been spent elsewhere, unfortunately, and that’s prevented me from being able to be doing MDN work I’d have otherwise normally been doing.)

Anyway, this “AI Explain” thing is a monumentally bad idea, clearly — for obvious reasons (but also for the specific reasons that others have taken time to add comments to this issue to help make clear).

(note: the above reply was hidden in the GitHub thread by Mozilla, usually something you only do for off topic replies)

so this thing was pushed into MDN behind the backs of Mozilla’s experts and given only 15 minutes of review (ie, none)? who could have done such a thing?

…so anyway, some kind of space alien comes in and locks the thread:

Hi there, 👋

Thank you all for taking the time to provide feedback about our AI features, AI Explain and AI Help, and to participate in this discussion, which has probably been the most active one in some time. Congratulations to be a part of it! 👏

congratulations to be a part of it indeed

 

hopefully this is alright with @dgerard@awful.systems, and I apologize for the clumsy format since we can’t pull posts directly until we’re federated (and even then lemmy doesn’t interact the best with masto posts), but absolutely everyone who hasn’t seen Scott’s emails yet (or like me somehow forgot how fucking bad they were) needs to, including yud playing interference so the rats don’t realize what Scott is

 

there’s just so much to sneer at in this thread and I’ve got choice paralysis. fuck it, let’s go for this one

everyone thinking Prompt Engineering will go away dont understand how close Prompt Engineering is to management or executive communications. until BCI is perfect, we'll never be done trying to serialize our intent into text for others to consume, whether AI or human.

boy fuck do I hate when my boss wants to know how long a feature will take, so he jacks straight into my cerebral cortex to send me email instead of using zoom like a normal person

 

it’s a short comment thread so far, but it’s got a few posts that are just condensed orange site

The constant quest for "safety" might actually be making our future much less safe. I've seen many instances of users needing to yell at, abuse, or manipulate ChatGPT to get the desired answers. This trains users to be hateful to / frustrated with AI, and if the data is used, it teaches AI that rewards come from such patterns. Wrote an article about this -- https://hackernoon.com/ai-restrictions-reinforce-abusive-user-behavior

But you think humans (by and large) do know what "facts" are?

 

one of hn’s core demographics (windbag grifters) fights with a bunch of skeptics over whether it’s a bad thing the medicine they’re selling is mostly cocaine and alcohol

 

linked to the orange site because there's a funny contrast in the comments between paully's fans who think they've just read the greatest thing imaginable and paully's more jaded fans who want to know why he's posting this when the industry's entering a downturn
