UnseriousAcademic

joined 5 months ago

As a silver lining, I imagine all of us in education will retain our jobs and just be unburdened of marking. Thus automation will bring us more freedom and time to develop thoughtful and engaging educational experiences.

Just as automation has always done. Right? RIGHT?!

[–] UnseriousAcademic@awful.systems 4 points 3 months ago (2 children)

I remember one time in a research project I switched out the tokeniser to see what impact it might have on my output. I spent about a day re-running everything and the difference was minimal. I imagine it's wholly the same thing.

*Disclaimer: I don't actually imagine it is wholly the same thing.
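For anyone curious what that kind of swap actually involves, here's a minimal sketch using Hugging Face's transformers library. The model names are purely illustrative stand-ins, not the ones from my project:

```python
# Minimal sketch: comparing how two different tokenisers split the same text.
# Assumes `pip install transformers`; model names are illustrative only.
from transformers import AutoTokenizer

text = "Automation will surely free us all from marking."

# Two different pretrained tokenisers (GPT-2's BPE vs BERT's WordPiece).
gpt2_tok = AutoTokenizer.from_pretrained("gpt2")
bert_tok = AutoTokenizer.from_pretrained("bert-base-uncased")

for name, tok in [("gpt2", gpt2_tok), ("bert-base-uncased", bert_tok)]:
    tokens = tok.tokenize(text)
    print(f"{name}: {len(tokens)} tokens -> {tokens}")
```

Same input text, different token boundaries; in my case the downstream output barely moved.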

The only viable use case, in my opinion, is to utilise its strong abilities in SolidGoldMagikarp to actualise our goals in the SolidGoldMagikarp sector and achieve increased margins on SolidGoldMagikarp.

If they can somehow shoehorn Blair's favourite ID card scheme into it, they might win some sort of internal Labour bingo game.

[–] UnseriousAcademic@awful.systems 14 points 4 months ago (5 children)

Does this mean they're not going to bother training a whole new model again? I was looking forward to seeing AI Mad Cow Disease after it consumed an Internet's worth of AI-generated content.

I really should have done a full risk assessment before invoking the dust specks mind virus, my apologies.

Thanks for the kind feedback, I'm glad that my thoughts resonated with people. Sometimes I start these things and wonder if I've just analysed my way into a weird construct of my own creation.

[–] UnseriousAcademic@awful.systems 24 points 4 months ago (2 children)

My most charitable interpretation of this is that he, like a lot of people, doesn't understand AI in the slightest. He treated it like Google, asked it for some of the most negative quotes from movie critics about past Coppola films, and the AI hallucinated some for him.

If true, it's a great example of why AI is actually worse for information retrieval than a basic vector-based search engine.
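To be concrete about what I mean by vector-based search, here's a toy sketch using scikit-learn (the corpus and query are made up for illustration, and this isn't any particular engine's implementation). The key point: retrieval ranks documents that actually exist, so it can't invent a quote the way a generative model can:

```python
# Toy sketch of vector-based retrieval: TF-IDF vectors + cosine similarity.
# Assumes `pip install scikit-learn`; corpus is invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "Critic A called the film a bloated misfire.",
    "Critic B praised the director's ambitious vision.",
    "Critic C found the pacing slow but the imagery striking.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(corpus)

query = "negative reviews of the film"
query_vector = vectorizer.transform([query])

# Rank documents by similarity to the query; only real documents
# from the corpus can ever be returned.
scores = cosine_similarity(query_vector, doc_vectors).ravel()
for score, doc in sorted(zip(scores, corpus), reverse=True):
    print(f"{score:.3f}  {doc}")
```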

Forgot to say: yes, AI-generated slop is one key example, but often I'm also thinking of other tasks that are presumed to be basic because humans can be trained to perform them with barely any conscious effort. Things like self-driving vehicles, production line work, call centre work etc. As with full self-driving still requiring supervision, what often happens with tech automation is that it creates things that de-skill the role or perhaps speed it up, but still require humans in the middle to do things that are simple for us but difficult to replicate computationally. Humans become the glue, slotted into all the points of friction and technical inadequacy, to keep the whole process running smoothly.

Unfortunately this usually leads to downward pressure on the wages of the humans, and the expectation that they match the theoretical speed of the automation rather than recognising that the human is the actual pace-setter, because without them the pace would be zero.

Funnily enough that was the bit I wrote last, just before hitting post on Substack. A kind of "what am I actually trying to say here?" moment. Sometimes I have to switch off the academic bit of my brain and just say what I think in order to get to clarity. Glad it hit home.

Thanks for the link. I'm going to read that piece and have a look through the ensuing discussion.

[–] UnseriousAcademic@awful.systems 5 points 4 months ago (7 children)

Oh god, it's real? I saw pictures and there were a lot of "it's AI" claims, which I kind of hoped were true.

[–] UnseriousAcademic@awful.systems 9 points 4 months ago (7 children)

There's definitely something to this narrowing-of-opportunities idea. To frame it in a really bare-bones way: it's people who frame the world in simplistic terms and then assume that their framing is the complete picture (because they're super clever, of course). Then, when they try to address a problem with a "solution", they address only their abstraction of it and, if successful in the market, actually make the abstraction the dominant form of it. However, all the things they disregarded are either lost, or still there and undermining their solution.

It's like taking a 3D problem, only seeing in 2D, implementing a 2D solution, and then being surprised that it doesn't do what it should, or being confused by all the unexpected effects coming from the third dimension.

Your comment about giving more grace also reminds me of work from legal scholars who have argued that algorithmically implemented law doesn't work, because the law itself is designed to have a degree of interpretation and slack to it that rarely translates well into an "if x then y" model.
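A toy sketch of what I mean (the rule and threshold here are hypothetical, just to show the shape of the problem):

```python
# Toy illustration: a statute collapsed into an "if x then y" rule.
# The hard-coded threshold is hypothetical; real law often hinges on
# open-textured standards ("reasonable", "due care") with no fixed value.
def algorithmic_ruling(speed_kmh: float, limit_kmh: float = 50.0) -> str:
    if speed_kmh > limit_kmh:
        return "violation"
    return "no violation"

# 50.1 km/h rushing to hospital and 50.1 km/h street racing get the same
# ruling; the interpretive slack a human judge applies has nowhere to live.
print(algorithmic_ruling(50.1))  # violation
```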

[–] UnseriousAcademic@awful.systems 6 points 4 months ago (1 children)

Oh no, the dangers of having people read your work!

It is coming, potentially in the next week. I was on leave for a couple of weeks, and since getting back I've been finishing up a paper with my colleague on Neoreaction and ideological alignment between disparate groups. We should be submitting to the journal very soon, and then I can get back to finishing off this series.
