this post was submitted on 28 Oct 2024

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 1 year ago

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

Last week's thread

(Semi-obligatory thanks to @dgerard for starting this)

[–] BigMuffin69@awful.systems 12 points 6 days ago* (last edited 6 days ago) (10 children)

I know it's Halloween, but this popped up in my feed and was too spooky even for me 😱

As a side note, what are people’s feelings about Wolfram? Smart dude for sho, but some of the shit he says just comes across as straight-up pseudoscientific gobbledygook. But can he out-guru Big Yud in a 1v1 on Final Destination (fox only, no items)? 🤔

[–] swlabr@awful.systems 10 points 6 days ago (1 children)

I mean yud is only really a guru on his own farts

[–] froztbyte@awful.systems 9 points 5 days ago

classic warning case of someone getting high on their own supply

[–] swlabr@awful.systems 11 points 6 days ago (1 children)

https://www.infoworld.com/article/3595687/googles-flutter-framework-has-been-forked.html/

I’m currently using Flutter. It’s good! And useful! Much better than AI. It being mostly developed by Google has been a bit of a worry since Google is known to shoot itself in the foot by killing off its own products.

So while it’s no big deal to have an open source codebase forked, just wanted to highlight this part of the article:

Carroll also claimed that Google’s focus on AI caused the Flutter team to deprioritize desktop platforms, and he stressed the difficulty of working with the current Flutter team

Described as “Flutter+” by Carroll, Flock “will remain constantly up to date with Flutter,” he said. “Flock will add important bug fixes, and popular community features, which the Flutter team either can’t, or won’t implement.”

I hope this goes well!

[–] froztbyte@awful.systems 7 points 5 days ago (1 children)

that android project of some months back was a venture into flutter (hadn’t touched it before)

I had similar impressions on some things, and mixed ones on others

dart’s a moderately good language with some nice primitives, tooling overall is pretty mature, broad strokes it works well for variant targeting and shit

libraries though, holy shit, the current situation (then) was a mess. one minor flutter sdk upgrade and a whole bunch of things just exploded (serialisation bits in the nosql-type libraries I’d tried for the ostensibly desired magic factor; I just went back to sqlite stuff after). this can’t have been due to sdk drift alone, and it felt like an iceberg problem

and then the documentation: fucking awful for getting started. excellent as technical documentation once you grok shit, but before that all the examples and things are terrible. lots of extremely important details hidden in single mentions in offhand sentences, in places where if you don’t happen to be looking at that exact page, good luck finding them. this, too, felt like inadequate care and attention from el goog

I imagine if one is working with this every day you know the lay of the land and where to avoid stepping into holes, but wow was I surprised at how much it was possible to rapidly rakestep, given what the language pitches as
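(for anyone else who walks into the same rake: the usual mitigation is to pin the sdk range and the fragile packages exactly in pubspec.yaml, so a minor sdk bump can’t silently take the serialisation codegen down with it. a sketch only; the package names and version numbers below are made up for illustration)

```yaml
# hypothetical pubspec.yaml excerpt; names/versions are illustrative
environment:
  sdk: ">=3.3.0 <3.4.0"   # narrow range instead of the default open-ended one

dependencies:
  sqflite: 2.3.0          # exact pin (no caret) for packages that broke before

dev_dependencies:
  build_runner: 2.4.8     # codegen tooling pinned too; it drifts with the sdk
```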

[–] swlabr@awful.systems 7 points 5 days ago (1 children)

Yes, to be clear: when I said flutter is “good” I deliberately avoided a definition of “good”. I find it… usable.

[–] froztbyte@awful.systems 6 points 5 days ago

yep yep - didn’t mean to argue with your post so much as to fill in details on the fork, but I guess I could’ve been clearer about that

[–] o7___o7@awful.systems 13 points 6 days ago* (last edited 6 days ago)

Cursed .gov link:

https://www.state.gov/secretary-antony-j-blinken-at-the-advancing-sustainable-development-through-safe-secure-and-trustworthy-ai-event/

TL;DR: Our main characters have bilked a very credulous US State Department. 100 Million tax dollars will now be converted into entropy. There will also be committees.

[–] o7___o7@awful.systems 14 points 6 days ago* (last edited 6 days ago) (1 children)

Go home Coursera, you're drunk.

Want to get even better results with GenAI? The new Google Prompting Essentials course will teach you 5 easy steps to write effective prompts for consistent, useful results.

Note: Got an email ad from Coursera. I had to highlight the message because the email's text was white-on-white.

How the chicken fried fuck does anyone make a course about "prompt engineering"? It's like seeing a weird sports guy systematize his pregame rituals and then sell a course on it.

Step 1: Grow a beard, preferably one like that Leonidas guy in 300.

Step 2: If your team wins, never wash those clothes, and be sure to wear those clothes every game day. That's not stank, that's the luck diffusing out into the universe.

Step 3: Use the force to make the ball go where it needs to go. Also use it to scatter and confuse the opposition.

Step 4: Ask God(s) to intervene, he/she/they love(s) your team more!

Step 5: Change allegiance to a better team if things go downhill, because that means your current team has lost the Mandate of Heaven.

That will be $200 please.

[–] bitofhope@awful.systems 8 points 6 days ago

Thanks, Google. You know, I used to be pretty good at getting consistent, useful results from your search engine, but the improvements you've made to it since then make me feel like I really might need a fucking prompt engineering course to find things on the internet these days. By which I mean something that'll help you promptly engineer the internet back into a form where search engines work correctly.

[–] BlueMonday1984@awful.systems 14 points 6 days ago (1 children)

Jingna Zhang found an AI corp saying the quiet part out loud:

In a previous post of mine, I noted how the public generally feels that the jobs people want to do (mainly creative jobs) are the ones being chiefly threatened by AI, with the dangerous, boring and generally garbage jobs being left relatively untouched.

Looking at this, I suspect the public views anyone working on/boosting AI as someone who knows full well their actions are threatening people's livelihoods/dream jobs, and is actively, willingly and intentionally threatening them, either out of jealousy for those who took the time to develop the skills, or out of simple capitalist greed.

[–] o7___o7@awful.systems 10 points 6 days ago* (last edited 6 days ago) (3 children)

I thought the Raytheon ads for tanks and knife missiles in the Huntsville, AL airport were bad, but this takes the whole goddamn cake.

[–] BlueMonday1984@awful.systems 10 points 6 days ago (2 children)

Raytheon can at least claim they're helping kill terrorists or some shit like that. Artisan's just going out and saying "We ruin good people's lives for money, and we can help you do that too".

[–] bitofhope@awful.systems 10 points 6 days ago

Grift tech that claims to do awful shit that ruins everyone's lives, but really just makes Stanford grads sit around pretending to invent something while funneling VC money directly into their bloodstreams.

You'd think these would overflow the evil scale and wrap back around into being ethical, but really they're just doing the same thing as the non-vaporware evil companies, just with some extra steps.

[–] o7___o7@awful.systems 7 points 6 days ago* (last edited 6 days ago)

Right? At least the knife missile does what it says on the tin.

Apologies in advance for the Rick and Morty reference, but Artisan seems to be roughly congruent to "Simple Rick" candy bars.

The (poorly executed) distillation of the life's work of actually talented and interesting people, sold as a direct replacement, to fill a void that the customer doesn't even know exists.

[–] blakestacey@awful.systems 6 points 6 days ago

Ah, Huntsville. Where the downtown convention hall is the Wernher von Braun Center.

🎶 the man whose allegiance is ruled by expedience 🎶

[–] skillissuer@discuss.tchncs.de 6 points 6 days ago* (last edited 6 days ago) (2 children)

You don't get it, this is a likely bribe

[–] s3p5r@lemm.ee 4 points 5 days ago (7 children)

Help me out, the coffee isn't working today and I still don't get it. How does bribery fit in?

[–] o7___o7@awful.systems 5 points 6 days ago (1 children)

yikes, good call! I couldn't get past the Borderlands 2 vibes, but you're right.

[–] skillissuer@discuss.tchncs.de 6 points 6 days ago

it's like generalized manufacturing consent

[–] antifuchs@awful.systems 12 points 6 days ago (1 children)

Microsoft found a fitting way to punish AI for collaborating with SEO spammers in generating slop: make it use the GitHub code review tools. https://github.blog/changelog/2024-10-29-refine-and-validate-code-review-suggestions-with-copilot-workspace-public-preview/

[–] self@awful.systems 13 points 6 days ago (7 children)

we really shouldn’t have let Microsoft both fork an editor and buy GitHub, of course they were gonna turn one into a really shitty version of the other

anyway check this extremely valuable suggestion from Copilot in one of their screenshots:

The error message 'userId and score are required' is unclear. It should be more specific, such as 'Missing userId or score in the request body'.

aren’t you salivating for a Copilot subscription? it turns a lazy error message into… no that’s still lazy as shit actually, who is this for?

  • a human reading this still needs to consult external documentation to know what userId and score are
  • a machine can’t read this
  • if you’re going for consistent error messages or you’re looking to match the docs (extremely likely in a project that’s in production), arbitrarily changing that error so it doesn’t match anything else in the project probably isn’t a great idea, and we know LLMs don’t do consistency

@self did somebody make an extension that replaces github copilot with ELIZA yet

[–] veganes_hack@feddit.org 14 points 6 days ago (1 children)

Zuck says lots more slop coming your way soon

“I think we’re going to add a whole new category of content which is AI generated or AI summarized content, or existing content pulled together by AI in some way,” the Meta CEO said. “And I think that that’s gonna be very exciting for Facebook and Instagram and maybe Threads, or other kinds of feed experiences over time.”

Facebook is already one Meta platform where AI generated content, sometimes referred to as “AI slop,” is increasingly common.

[–] froztbyte@awful.systems 8 points 6 days ago

mm I wonder what kind of content they'll want to use that for

[–] khalid_salad@awful.systems 11 points 6 days ago* (last edited 6 days ago) (1 children)

Is there a group that more consistently makes category errors than computer scientists? Can we mandate Philosophy 101 as a pre-req to shitting out research papers?

Edit: maybe I need to take a break from Mystery AI Hype Theater 3000.

[–] FredFig@awful.systems 5 points 6 days ago

Chat-GPT-TFSD-21guns can have a little anthropomorphism, as a treat.

  • Sam Altman, probably
[–] swlabr@awful.systems 25 points 1 week ago* (last edited 1 week ago) (7 children)

Kicking off the sack with something light.
