this post was submitted on 21 Jan 2024
826 points (95.0% liked)

[–] pavnilschanda@lemmy.world 45 points 9 months ago (2 children)

Apparently people who specialize in AI/ML have a very hard time replicating the intended poisoning effect when training models on 'poisoned' data. Is that true?

[–] Even_Adder@lemmy.dbzer0.com 42 points 9 months ago* (last edited 9 months ago) (3 children)

I've only heard that running images through a VAE just once seems to break the Nightshade effect, but no one's really published anything yet.
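Roughly, that rumored "wash" is just encoding the image into the latent space and decoding it back out. A minimal sketch with the diffusers library and the public Stable Diffusion 1.5 VAE (the file names are placeholders, and whether this actually neutralizes Nightshade is, as said, unverified):

```python
import torch
from diffusers import AutoencoderKL
from diffusers.image_processor import VaeImageProcessor
from PIL import Image

# Load only the VAE component of SD 1.5 (the part the poisoning targets).
vae = AutoencoderKL.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="vae")
processor = VaeImageProcessor(vae_scale_factor=8)

image = Image.open("poisoned.png").convert("RGB")       # hypothetical input file
pixels = processor.preprocess(image)                    # [1, 3, H, W], scaled to [-1, 1]

with torch.no_grad():
    latents = vae.encode(pixels).latent_dist.sample()   # compress to latent space
    decoded = vae.decode(latents).sample                # decompress back to pixels

processor.postprocess(decoded, output_type="pil")[0].save("washed.png")
```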

You can finetune a model on known bad and incoherent images to help it output better images when the trained embedding is used in the negative prompt. So there's a chance that making a lot of purposefully bad data could actually make models better by helping the model recognize bad output and avoid it.
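For context, this is how such a "known bad" embedding is typically used at inference time, assuming a textual-inversion embedding trained on deliberately bad images (the file name and token here are made up):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load a textual-inversion embedding finetuned on known bad/incoherent outputs.
pipe.load_textual_inversion("bad-images.safetensors", token="<bad-quality>")

image = pipe(
    prompt="a watercolor painting of a lighthouse at dawn",
    negative_prompt="<bad-quality>",   # steer generations away from the "bad" concept
).images[0]
image.save("output.png")
```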

[–] watersnipje@lemmy.blahaj.zone 11 points 9 months ago (2 children)

What's a VAE?
[–] Batman@lemmy.world 10 points 9 months ago (1 children)

Think they mean a Variational AutoEncoder

[–] KeenFlame@feddit.nu 2 points 9 months ago

Variable. But no, running it through that will not break any effect.

[–] General_Effort@lemmy.world 9 points 9 months ago (1 children)

A Variational AutoEncoder is a kind of AI that can be used to compress data. In image generators, a VAE is used to compress the images. The actual image AI works on the smaller, compressed image (the latent representation), which means it takes a less powerful computer (and uses less energy). It's that which makes it possible to run Stable Diffusion at home.
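To put rough numbers on that compression, assuming the Stable Diffusion 1.5 VAE: a 512×512 RGB image becomes a 4×64×64 latent, so the diffusion model works on roughly 48× fewer values.

```python
import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="vae")
pixels = torch.randn(1, 3, 512, 512)                   # stand-in for a real image tensor
with torch.no_grad():
    latents = vae.encode(pixels).latent_dist.sample()
print(pixels.shape, "->", latents.shape)               # (1, 3, 512, 512) -> (1, 4, 64, 64)
print(pixels.numel() / latents.numel())                # ~48x fewer numbers to diffuse over
```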

This attack targets the VAE. The image is altered so that the latent representation looks like a very different image, but still roughly the same to humans. The actual image AI works on a different image. Obviously, this only works if you have the right VAE. So, it only works against open source AI; basically only Stable Diffusion at this point. Companies that use a closed source VAE cannot be attacked.
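A sketch of that idea (not Nightshade's actual code): optimize a small pixel perturbation so the open-source VAE encodes the image close to a decoy image's latent, while the pixel change stays within a tight budget. File names, budget, and step count are illustrative, and it assumes both images are the same size.

```python
import torch
from diffusers import AutoencoderKL
from diffusers.image_processor import VaeImageProcessor
from PIL import Image

vae = AutoencoderKL.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="vae")
vae.requires_grad_(False)                        # freeze the VAE; only the pixels change
proc = VaeImageProcessor(vae_scale_factor=8)

x = proc.preprocess(Image.open("original.png").convert("RGB"))
with torch.no_grad():
    target = vae.encode(proc.preprocess(Image.open("decoy.png").convert("RGB"))).latent_dist.mean

delta = torch.zeros_like(x, requires_grad=True)  # the (hopefully) barely visible perturbation
eps, step = 0.05, 0.01                           # budget and step size in [-1, 1] pixel units
for _ in range(200):
    latent = vae.encode((x + delta).clamp(-1, 1)).latent_dist.mean
    loss = torch.nn.functional.mse_loss(latent, target)   # pull the latent toward the decoy
    loss.backward()
    with torch.no_grad():                        # signed gradient step, then project to budget
        delta -= step * delta.grad.sign()
        delta.clamp_(-eps, eps)
        delta.grad.zero_()

proc.postprocess((x + delta).clamp(-1, 1).detach(), output_type="pil")[0].save("poisoned.png")
```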

I guess it makes sense if your ideology is that information must be owned and everything should make money for someone. I guess some people see cyberpunk dystopia as a desirable future. It doesn't seem to be a very effective attack but it may have some long-term PR effect. Training an AI costs a fair amount of money. People who give that away for free probably still have some ulterior motive, such as being liked. If instead you get the full hate of a few anarcho-capitalists that threaten digital vandalism, you may be deterred. Well, my two cents.

[–] watersnipje@lemmy.blahaj.zone 3 points 9 months ago

Thank you for explaining. I work in NLP and am not familiar with all the CV acronyms. That sounds like it kind of defeats the purpose if it only targets open source models. But yeah, it makes sense that you would need the actual autoencoder in order to learn how to alter your data such that the representation from the autoencoder is different enough.

[–] sukhmel@programming.dev 5 points 9 months ago

> So there's a chance that making a lot of purposefully bad data could actually make models better by helping the model recognize bad output and avoid it.

This would be truly ironic

[–] HelloHotel@lemmy.world 2 points 9 months ago* (last edited 9 months ago)

If users have enough control and can coordinate, you could gaslight the AI into a screwed-up alternate reality.

[–] Miaou@jlai.lu 14 points 9 months ago (1 children)

Until they come up with some preprocessing step, or better feature extractors, etc. This is an arms race, like many others.

[–] Schmeckinger@lemmy.world 4 points 9 months ago

The thing is, data poisoning is an arms race that the AI side will win with ease. You can solve it with either preprocessing or filtering. All the poisoning does is make the images look worse. I can't think of a way to poison data where unpoisoning it takes more effort than poisoning it did.
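For illustration, the kind of cheap preprocessing meant here could be as simple as a downscale plus a lossy JPEG re-encode applied to every scraped image (placeholder file names; whether this defeats any particular poisoning scheme is untested):

```python
from io import BytesIO
from PIL import Image

def wash(path: str, out_path: str, scale: float = 0.75, quality: int = 85) -> None:
    img = Image.open(path).convert("RGB")
    w, h = img.size
    img = img.resize((int(w * scale), int(h * scale)), Image.LANCZOS)  # downscale
    buf = BytesIO()
    img.save(buf, format="JPEG", quality=quality)                      # lossy re-encode
    Image.open(buf).resize((w, h), Image.LANCZOS).save(out_path)       # back to original size

wash("scraped.png", "cleaned.png")
```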