this post was submitted on 10 Aug 2023

TechTakes


Pretty soon, paying for all the APIs you need to make sure your Midjourney images are palatable will be enough to pay a human artist!

top 17 comments
[–] kuna@awful.systems 1 points 1 year ago (1 children)

I always feel a culture shock when I see someone claim "we use $AI in production at $COMPANY", having mostly seen these things used for silly shit like Neco Arc covering System of a Down.

[–] bitofhope@awful.systems 1 points 1 year ago (1 children)

As much as I hate handing it to the plagiarism machines, that Neco Arc track is a banger.

[–] kuna@awful.systems 1 points 1 year ago (1 children)

plagiarism machines

The AI-generated parts of these kinds of "covers" generally have exactly two easily identifiable ingredients: the original singer's melody and intonation, combined with the voice of an actor playing the cartoon character. While it's done without the consent of either, I still think it falls under mashup/parody, with the AI magic making the whole thing sound better for relatively little effort. It's not like using a committee of uncredited artists to make something entirely new and then passing it off as one's own.

[–] bitofhope@awful.systems 1 points 1 year ago

I agree and don't think the cover itself is objectionable. Covers and remixes are a valuable and essential part of musical tradition, and this is a good piece of art even though it's derived from the work of Yuzuki Ryōka and SOAD without approval.

The "plagiarism machines" part is about the way machine learning companies use unpaid and underpaid labor of millions and pass that off as the work of a quasi-sentient machine, often with the explicit aim of replacing the countless creative workers whose work was used to build the mashup machine. I think I'm actually pretty anti-copyright but at least the rules should also apply when it's a tech corp cribbing from masses of proles.

A cartoon cat singing a nu-metal song about ADHD while sounding like Marianne Faithfull with a cold is the least we deserve in return.

[–] swlabr@awful.systems 1 points 1 year ago (1 children)

bro just one more AI company bro, bro I swear just one more AI company and we'll reach singularity bro

Unrelatedly, I just unlocked a core memory: I learned about the "signularity" (sic, misspelled as a joke) from the webcomic Questionable Content. Why do I remember that?

[–] froztbyte@awful.systems 1 points 1 year ago* (last edited 1 year ago) (1 children)

speaking of which: sweep.dev

Sweep's Core Algorithm - Using Retrieval Augmented Generation (RAG) to clear your GitHub Backlog

At Sweep, our core issue-to-pull-request pipeline revolves around a RAG-based pipeline. This means we retrieve snippets from a corpus (your codebase) and feed them to a language model (GPT-4) to generate code.
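
(for the unfamiliar, that blurb boils down to roughly the loop below — a minimal sketch, where `retrieve_snippets` and the `index` object are made-up placeholders for whatever vector store they use, not anything from Sweep's actual code:)

```python
# Retrieve-then-generate: fetch code snippets relevant to the issue,
# stuff them into a prompt, ask GPT-4 for a patch. Illustrative only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def retrieve_snippets(query: str, index, k: int = 5) -> list[str]:
    """Hypothetical lookup over a pre-embedded codebase; `index` and its
    .search() method are stand-ins, not a real library API."""
    return index.search(query, top_k=k)

def issue_to_patch(issue_text: str, index) -> str:
    snippets = retrieve_snippets(issue_text, index)
    prompt = (
        "You are fixing a GitHub issue.\n\n"
        "Relevant code snippets:\n" + "\n---\n".join(snippets)
        + "\n\nIssue:\n" + issue_text
        + "\n\nRespond with a unified diff that resolves the issue."
    )
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```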

a wondrous automaton crafting branch and PR for every badly-written and/or outdated entry in your n-many github repositories! with in-built violation-of-boundaries!

true perfection, this will go absolutely perfectly. I have no notes.

[edit: this should possibly even be its own thread]

[–] gerikson@awful.systems 1 points 1 year ago (1 children)

What the fuck? Is that a parody? People are mocking dumb LLM ideas, right? Right?!

[–] froztbyte@awful.systems 1 points 1 year ago (1 children)

I certainly am mocking this, but that thing appears to be a whole-ass startup.

[–] gerikson@awful.systems 1 points 1 year ago* (last edited 1 year ago) (1 children)

I mean, my background in support is showing here... if an issue was submitted that was clear enough for someone to fix, it would be fixed[1] (i.e. "there's a bug in file X, line Y, the comparison is strictly less-than but should be less-or-equal"). By definition, the ones that are left are unclear, nonsensical, off-topic, or, all too often, all three.


[1] modulo developer laziness

[–] froztbyte@awful.systems 0 points 1 year ago (1 children)

There’s also “used to matter, but then things changed and we just never got to this because it’s 200+ things deep in the backlog” etc kind of situations

Anyway, I look forward to hearing about someone using this and something going catastrophically wrong

[–] gerikson@awful.systems 1 points 1 year ago

I can replace this product with a script that just closes them as "works as designed WONTFIX"
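
(the whole "product", as a script — placeholder owner/repo/token, but the endpoints are the standard GitHub REST v3 ones:)

```python
# Close every open issue with a canned resolution. Do not actually run this.
import requests

OWNER, REPO = "example-org", "example-repo"   # placeholders
HEADERS = {"Authorization": "token YOUR_TOKEN_HERE",
           "Accept": "application/vnd.github+json"}
BASE = f"https://api.github.com/repos/{OWNER}/{REPO}/issues"

for issue in requests.get(BASE, params={"state": "open"}, headers=HEADERS).json():
    if "pull_request" in issue:   # the issues endpoint also returns PRs; skip those
        continue
    n = issue["number"]
    requests.post(f"{BASE}/{n}/comments", headers=HEADERS,
                  json={"body": "works as designed, WONTFIX"})
    requests.patch(f"{BASE}/{n}", headers=HEADERS, json={"state": "closed"})
```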

[–] self@awful.systems 1 points 1 year ago (1 children)

it goes without saying that none of the hackermen point out that if it were as simple as applying a CV model to the generated output, the company generating the output would already be doing it (and I’ve given up on the idea that a hacker news poster could begin to analyze why that is). however:

A meta question for y'all if you're willing to share. I've seen y'all launch what I think is now 5 different ideas under the same name? I want to say that I remember a platform for musicians, and a platform to automatically convert a codebase into hostable inference servers, among a few other things.

this feels like a con to me, or at least an indicator that these folks have no idea what they want to do. a post with that kind of tone would most likely get flagged dead by the mods, though, so the orange site remains a target-rich environment if you’re looking for folk who might invest in your scam/bad idea

[–] 200fifty@awful.systems 1 points 1 year ago (1 children)

there actually is a comment making this point now:

Isn't this product kind of impossible? Like a compression program that compresses compressed files? If you have an algorithm for determining whether a generated image is good or bad couldn't the same logic be incorporated into the network so that it doesn't generate bad images?

the reply is a work of art:

We’re optimistic about using our own algorithms and models to evaluate another model. In theoretical computer science, it is easier to verify a correct solution than to generate a correct solution (P vs NP problem).

it's not even wrong, as they say
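
(to spell out the real statement they're garbling: for problems in NP you can check a supplied witness in polynomial time, which says nothing about conjuring a correct solution — let alone an "is this image good" oracle. A toy SAT illustration, all names made up:)

```python
# Verifying a candidate assignment is linear in the formula size;
# the only general generator we know is exponential brute-force search.
from itertools import product

# CNF formula: each clause is a list of literals; 1 means x1, -2 means "not x2"
formula = [[1, -2], [-1, 2], [2, 3]]

def verify(assignment: dict[int, bool]) -> bool:
    """Polynomial-time check: every clause contains a satisfied literal."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in formula
    )

def generate() -> dict[int, bool] | None:
    """Exponential in the number of variables: try every assignment."""
    variables = sorted({abs(lit) for clause in formula for lit in clause})
    for bits in product([False, True], repeat=len(variables)):
        candidate = dict(zip(variables, bits))
        if verify(candidate):
            return candidate
    return None

print(generate())  # {1: False, 2: False, 3: True}
```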

[–] self@awful.systems 1 points 1 year ago (2 children)

In theoretical computer science, it is easier to verify a correct solution than to generate a correct solution (P vs NP problem).

wh-what? I — there’s just so much wrong. is this the maximum information density of wrongness? the shortest string that encodes the most incorrect information? have these absolute motherfuckers just invented and then solved inverse Kolmogorov complexity?

[–] dgerard@awful.systems 1 points 1 year ago (1 children)

that sounds like an oversimplification that was further oversimplified by at least two editors

[–] gerikson@awful.systems 1 points 1 year ago

They used an LLM.

[–] froztbyte@awful.systems 1 points 1 year ago* (last edited 1 year ago)

interestingly the replies have the same kind of tone that you often see in cryptographic kookery, so that's another strong warning signal