the F22 is the C++ of military planes and I mean that in the most derogatory way possible
where the fuck is his left foot
also, did Scott fuck up and not notice this one is plagiarism? not just of the original painting, but of however many art class portraiture recreations of the painting the model trained on
most likely including a particularly awful one I was behind the camera for! but because it’s art class and not assholes doing plagiarism, the point of the exercise isn’t that it’s original or even good (and under no circumstances are you pretending you came up with an original work) — it’s to explore the elements that made the original good. I remember we put a lot of effort into getting the light and shadow around the left shoulder and head just right, which are elements the generative knockoff just entirely fucks up because of course it does (also what the fuck is on his forehead?)
my strong impression is that surveillance advertising has been an unmitigated disaster for the ability to actually sell products in any kind of sensible way — see also the success of influencer marketing, under the (utterly false) pretense that it’s less targeted and more authentic than the rest of the shit we’re used to
but marketing is an industry run by utterly incompetent morally bankrupt fuckheads, so my impression is also that none of them particularly know or care that the majority of what they’re doing doesn’t work; there’s power in surveillance and they like that feeling, so the data remains extremely valuable on the market
fucking imagine coming back to a place you’re not welcome with this “eeehhh you’re being a bit aggressive tbh” shit
I think your response is a bit aggressive TBH.
nah, an aggressive response is me telling you to go fuck yourself as I ban you for a second(!) time for making these exact terrible fucking posts
I’ve saved many hours of work with it, in languages I don’t really even know.
maybe by next ban you’ll figure out why your PRs keep getting closed
congrats on asking jeeves
is it bad to be bad at a system designed for exploitation? maybe your grandma had a point
what’s wild is in the ideal case, a person who really doesn’t have anything to hide is unimaginably dull and has effectively just confessed that they would sell you out to the authorities for any or no reason at all
people with nothing to hide are the worst people
maybe it was a mistake to lionize a corporate monopolist to the level where we ostracized people for not being “good” at using their trap of a product
the marketing fucks and executive ghouls who came up with this meme (which used to surface every time I talked about wanting to de-Google) are also the ones who make a fuckton of money off of having a real-time firehose of personal data straight from the source, ’cause that’s by far what’s most valuable to advertisers and surveillance firms (but I repeat myself)
the linked Buttondown article deserves highlighting because, as always, Emily M Bender knows what’s up:
If we value information literacy and cultivating in students the ability to think critically about information sources and how they relate to each other, we shouldn't use systems that not only rupture the relationship between reader and information source, but also present a worldview where there are simple, authoritative answers to questions, and all we have to do is to just ask ChatGPT for them.
(and I really should start listening to Mystery AI Hype Theater 3000 soon)
also, this stood out, from the OpenAI/Common Sense Media (ugh) presentation:
As a responsible user, it is essential that you check and evaluate the accuracy of the outputs of any generative AI tool before you share it with your colleagues, parents and caregivers, and students. That includes any seemingly factual information, links, references, and citations.
this is such a fucked framing of the dangers of informational bias, algorithmic racism, and the laundering of fabricated data through the false authority of an LLM. framing it as an issue where the responsible party is the non-expert user is a lot like saying “of course you can diagnose your own ocular damage, just use your eyes”. it’s very easy to perceive the AI as unbiased in situations where the bias agrees with your own, and that is incredibly dangerous to marginalized students. and as always, it’s gross how targeted this is: educators are used to being the responsible ones in the room, and this might feel like yet another responsibility to take on — but that’s not a reasonable way to handle LLMs as a source of unending bullshit.
Lack of familiarity with AI PCs leads to what the study describes as "misconceptions," which include the following: 44 percent of respondents believe AI PCs are a gimmick or futuristic; 53 percent believe AI PCs are only for creative or technical professionals; 86 percent are concerned about the privacy and security of their data when using an AI PC; and 17 percent believe AI PCs are not secure or regulated.
ah yeah, you just need to get more familiar with your AI PC so you stop caring what a massive privacy and security risk both Recall and Copilot are
lol @ 44% of the study’s participants already knowing this shit’s a desperate gimmick though
holy shit. I learned a new term today (resentment-based marketing) and I’m fascinated by how this could possibly work