If I look at someone's paintings, then paint something in a similar style did I steal their work? Or did I take inspiration from it?
No, you used it to inform your style.
You didn't drop his art onto a screenprinter, smash someone else's art on top, then try to sell t-shirts.
Trying to compare any of this to how one individual human learns is a wildly inaccurate way to justify stealing someone else's work product.
If it works correctly, it's not a screenprinter; the output is something unique.
The fact that folks can identify the source of various parts of the output, and that intact watermarks have shown up, shows that it doesn't work like you think it does.
They can't, and "intact" watermarks don't show up. You're the one who is misunderstanding how this works.
When a pattern is present very frequently the AI can learn to imitate it, resulting in things that closely resemble known watermarks. This is called "overfitting" and is avoided as much as possible. But even in those cases, if you examine the watermark-like pattern closely you'll see that it's usually quite badly distorted and only vaguely watermark-like.
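To make "overfitting" concrete, here's a toy sketch (plain numpy, nothing to do with any real image model): give a model too much capacity relative to its data and it memorizes the noise instead of the underlying trend, much the way an image model that sees the same watermark thousands of times starts reproducing a blurry version of it.

```python
# Toy illustration of overfitting: a model with too much capacity
# memorizes its training data instead of learning the underlying pattern.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 10)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, size=x.size)  # noisy samples

good = np.polynomial.Polynomial.fit(x, y, deg=3)     # captures the trend
overfit = np.polynomial.Polynomial.fit(x, y, deg=9)  # memorizes every noisy point

x_new = 0.05  # a point between the training samples
print("degree-3:", good(x_new))     # close to sin(2*pi*0.05)
print("degree-9:", overfit(x_new))  # often far from the true curve
```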
Yes, because "imitate" and "copy" are different things when stealing from someone.
I do understand how it works; the "overfitting" just lays bare what it does. It copies, but tries to sample things in a way that won't look like clear copies. It has no creativity; it's just trying to find new ways of making copies.
If any of this were ethical, the companies doing it would have just asked for permission. That they didn't says everything you need to know.
I don't usually have these kinds of discussions anymore. I got tired of conversations like this back in 2016, when it became clear that people will go to the ends of the earth to justify unethical behavior as long as the people being hurt by it are people they don't care about.
And we're back to you calling it "stealing", which it certainly is not. Even if it was copyright violation, copyright violation is not stealing.
You should try to get the basic terminology right, at the very least.
Just because you've redefined theft in a way that makes you feel okay about it doesn't change what they did.
They took someone else's work product, fed it into their machine then used that to make money.
They stole someone's labor.
I haven't "redefined" it, I'm using the legal definition. People do sometimes sloppily equate copyright violation with theft in common parlance, but they're in for a rude awakening if they intend to try translating that into legal action.
Using that term in an argument like this merely begs the question of whether it's wrong: since most everyone agrees that stealing is wrong, you're trying to cast the act of training an AI as something everyone will by default agree is wrong. But it's not stealing, no matter how much you want it to be, and I'm calling that rhetorical trick out here.
If you want to argue that it's wrong you need to argue against the actual process that's happening, not some magical scenario where the AI trainers are somehow literally robbing people.
Taking someone's work product and converting it, without compensation or consent, into your profit is theft of labor.
Adding extra steps, like, say, training an AI, doesn't absolve the theft of labor.
Were it ethical, the companies doing it would have asked for permission and been given consent. They didn't.
That's not what's going on here. The finished product contains only the style of the artist the AI was trained on, and style is not copyrightable. Which is a damn good thing, as humans have been learning from each other's "work products" and mimicking each other's styles since time immemorial.
BTW, theft of labor means failing to pay wages or provide employee benefits owed to an employee by contract or law. You're using that term incorrectly too, Greg Rutkowski wasn't hired to do anything for the people who trained the AI off of his work.
No, I'm not using it incorrectly, I'm just not concerned with the legal definition as I'm not a lawyer or anyone tied up in this mess.
If you do a thing that takes time and skill, and then someone copies it, they stole your labor.
Saying they "copied his style", the style he spent a lifetime crafting, and then claiming they didn't benefit, at no cost, from the labor he put into crafting that style because "well actually, the law says..." is a bad argument, as it tries to minimize what they did.
If their product could not exist without his labor, and they did not pay him for that labor, they stole his labor.
For, like, the fourth time in this thread: were this ethical, they would have asked for permission. They didn't.
If you're just going to make up the meanings of words there's not much point in using them any further.
But I'm not.
You're trying to say that, because this one law doesn't say it's bad, it must therefore be good (or at least okay).
I'm simply saying that if you profit from someone else's labor, without compensating them (or at least getting their consent), you've stolen the output of that labor.
I'm happy to be done with this. I didn't expect my first Lemmy comment to get any attention, but no, I'm not going to suddenly be okay with this just because the legal definition of "stealing labor" is too narrow to fit this scenario.
The law doesn't even say it's okay. What FaceDeer is referring to is that copyright infringement is a different category of crime than theft, which is defined as pertaining to physical property. It's a meaningless point because, as you said, this isn't a courtroom, we aren't lawyers, and the concept of intellectual property theft is well understood.
It's a thing engineers and lawyers often seem to do: take the way terms are used in a particular professional jargon and assume that that usage is "the real" one.
Does that mean the AI is not smart enough to remove watermarks, or that it's so smart it can reproduce them?
It means that it's stupid enough that it reproduces them - poorly.
It's not smart or stupid. It does what it's been trained on, nothing more.
LLMs and directly related technologies are not AI and possess no intelligence or capability to comprehend, despite the hype. So they are absolutely the former, though it's rather a bandwagon sort of thing (X number of reference images had a watermark, so the generated image "should" have one too).
That's debatable. LLMs have shown emergent behaviors aside from what was trained, and they seem to be capable of comprehending relationships between all sorts of tokens, including multi-modal ones.
Anyway, Stable Diffusion is not an LLM; it's more of a "neural network hallucination machine" whose hallucinations are sometimes cool, and sometimes happen to be really close to parts of its input data. It still needs to be "smart" enough to decompose the original data into enough of the right patterns that it can reconstruct part of the original from the patterns alone.
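To gesture at what that "decompose and reconstruct" looks like: diffusion models like Stable Diffusion are trained by progressively noising images and learning to reverse the process. A minimal numpy sketch of the forward (noising) half, assuming a linear noise schedule with illustrative values:

```python
# Rough sketch of the "forward" half of a diffusion model: data is
# progressively destroyed with Gaussian noise, and the network is trained
# to undo one noising step at a time. Schedule values are illustrative.
import numpy as np

rng = np.random.default_rng(42)
x0 = rng.random(8)                      # stand-in for a (tiny) image
betas = np.linspace(1e-4, 0.02, 1000)   # assumed linear noise schedule
alpha_bar = np.cumprod(1.0 - betas)

def noise_to_step(x0, t):
    """Jump straight to timestep t: x_t = sqrt(a_bar)*x0 + sqrt(1-a_bar)*eps."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

print(noise_to_step(x0, 10))    # still mostly signal
print(noise_to_step(x0, 999))   # essentially pure noise
# Generation runs the learned reverse process: start from noise and
# repeatedly denoise, reconstructing "from the patterns alone".
```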
Thanks for the clarification!
LLMs have indeed shown interesting behaviors but, from my experience with the technology and how it works, I would say that any claims of intelligence being possessed by a system that is only an LLM would be suspect and require extraordinary evidence to prove that it is not mistaken anthropomorphizing.
I don't think an LLM alone can be intelligent... but I do think it can be the central building block for a sentient self-aware intelligent system.
Humans can be thought of as being made of a set of field-specific neural networks, tied together by a looping, self-evaluating, multi-modal LLM that we call "consciousness". The ability of an LLM to consume its own output is what allows it to be used as the consciousness loop, and the fact that current LLMs are trained on human language, with all its human nuance, is an extra bonus.
Probably some other non-text multi-modal neural networks capable of consuming their own output could also be developed and be put in a loop, but right now we have LLMs, and we kind of understand most of what they're saying, and they kind of understand most of what we're saying, so that makes communication easier.
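Roughly, that "consume your own output" loop looks like this (a hypothetical sketch; `generate` is a trivial stub standing in for any LLM call, not any real API):

```python
# Hypothetical sketch of the self-consuming loop described above.
def generate(prompt: str) -> str:
    # Stub: a real system would call a language model here.
    return f"thought about: {prompt[:40]}"

context = "I notice the room is dark."
for step in range(3):
    # Feed the model's previous output back in as its next input.
    context = generate(context)
    print(f"step {step}: {context}")
```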
I mean, it is anthropomorphizing, but in this case I think it makes sense because it's also anthropogenic, since these human language LLMs get trained on human language.
Absolutely agreed with most of that. I think that LLMs and similar technologies are incredible and have great potential to be components of artificial intelligences. LLMs by themselves are more akin to the "virtual intelligences" portrayed in the Mass Effect games, though currently with fewer guard rails to prevent hallucinations.
I suspect there may be a few other concurrent "loops" running in our meat computers, likely not as comparable to LLMs (though some might be), and their inefficiency and poor fidelity likely end up being part of what makes our consciousness. Otherwise, your approximation makes a lot of sense. There's still a lot to learn about our meat computers, but I really do hope we, as a species, succeed in making the world a bit less lonely (by helping other intelligence emerge).
There is some discussion about people "with an internal monologue" and people "without". I wonder if those might be different ways of running that loop, or maybe some people have one loop take over the others... and the whole "dissociative identity disorder" thing could be multiple loops competing to be the main one at different times.
Related to fidelity, some time ago I read an interesting thing: consciousness means having brainwaves out of sync; when they get in sync, people go unconscious. Coming from a background in electronics, I'd always assumed the opposite (system clock and such), but apparently our consciousness emerges from the asynchronous differences, meaning the inefficiencies and poor fidelity might be a feature, not a bug.
Anyway, right now, as someone suffering from insomnia, I'd happily merge with some AI just to get a "pause" button.
It's like staring at watermarked artworks for so long that you start seeing artworks with blurry watermarks in your dreams.