[–] self@awful.systems 16 points 10 months ago

one of the data fluffers solemnly logs me as “did not finish” as I flee the orgy

[–] self@awful.systems 10 points 10 months ago* (last edited 10 months ago) (1 children)

dear fuck I found their card database, which doesn’t seem to be linked from their main page (and which managed to crash its tab as soon as I clicked on the link to see all the cards spread out, because lazy loading isn’t real)

e: somehow the cards get less funny the higher the funny rating goes

e2: there’s no punchability rating but it’s desperately needed

[–] self@awful.systems 8 points 10 months ago

every card is a valuable lesson in how insufferable the Rationalists are

[–] self@awful.systems 7 points 10 months ago

oh that is damning

[–] self@awful.systems 10 points 10 months ago (2 children)

I propose a SneerClub game night where all we do is play rounds of this thing until we can’t handle any more of it

[–] self@awful.systems 13 points 10 months ago (18 children)

I learned about this because someone dragged a copy to Aella’s birthday orgy and it showed up in one of the photos, but the Rationalists have a Cards Against Humanity clone and it looks godawful

[–] self@awful.systems 10 points 10 months ago (1 children)

this is just an increasingly desperate Seto Kaiba taking to the internet because yu-gi-boy pointed out his AI-generated Duel Monsters deck does not have the heart of the cards, mostly because the LLM doesn’t understand probability, but he’s in too deep with the Kaiba Corp board to admit it

[–] self@awful.systems 63 points 10 months ago (4 children)

for anyone who wants to increase Amazon’s GPT bill by generating dildo limericks, it looks like this is only enabled for Amazon’s app, not their website

[–] self@awful.systems 26 points 10 months ago

how could the AI companies have seen this coming? it’s not like everyone has been loudly warning them about this specific danger in the exact technology they’re selling since at least the paper Google decided to fire Timnit over instead of listening to, if not longer

these technofascist fucks love when their plans look like accidents, but this is the exact shit that LLMs were built to accomplish. I don’t expect any real improvement, because deniably influencing elections is the kind of power fascists dream about

[–] self@awful.systems 10 points 10 months ago* (last edited 10 months ago)

ah yes, the type of nuance that can’t survive even the extremely mild amount of pushback you’ve experienced in this thread. but since we’re “fairly hostile” and all that, how about I make sure your lying AI-pushing ass can’t show up in any of our threads again

I should’ve known taking the time to explain our stance was a waste of my fucking time when you brought up nuance in the first place — the only time I see you shitheads give a fuck about it is when you’re looking to shift the Overton window while pretending to take a centrist position

[–] self@awful.systems 8 points 10 months ago (2 children)

who was this post for

[–] self@awful.systems 13 points 10 months ago
> 1. It will get better, and in the case of language models, that could have profound impacts on society

why is that a given?

> the materials research Deepmind published

these results were extremely flawed and disappointing, in a way that’s highly reminiscent of the Bell Labs replication crisis

> 2. There are other things being researched that are equally important that get little daylight, such as robotics and agentic AIs

these get brought up a lot in marketing, but the academic results of attempting to apply LLMs and generative AI to these fields have also been extremely disappointing

if you’re here seeking nuance, I encourage you to learn more about the history of academic fraud that occurred during the first AI boom and led directly to the AI winter. the tragedy of AI as a field is that all of the obvious fraud is and was treated with the same respect as the occasional truly useful computational technique

I also encourage you to learn more about the Rationalist cult that steers a lot of decisions in AI research (and especially in research with an AGI end goal). the communities on this instance have a long history of sneering at the Rationalists who would (years later) go on to become key researchers at essentially every large AI company, and that history has shaped the language we use. the podcast Behind the Bastards has a couple of episodes about the Rationalist cult and its relationship with AI research, and Robert Evans definitely does a better job describing it than I can
