this post was submitted on 22 Sep 2024
34 points (80.4% liked)

Programming

I figured out how to remove most of the safeguards from some AI models, and I don't feel comfortable sharing that information with anyone. Along the way I came across a few layers of obfuscation that make this kind of alteration harder to find and work out. It made me realize that many of you are likely faced with similar dilemmas of responsibility, gatekeeping, and manipulating others for ethical reasons. How do you feel about this?

[–] talkingpumpkin@lemmy.world 6 points 3 months ago* (last edited 3 months ago) (1 children)

I don't see the ethical implications of sharing that. What would happen if you did disclose your discoveries/techniques?

I don't know much about LLMs, but doesn't removing these safeguards just make the model as a whole less useful?

[–] j4k3@lemmy.world 2 points 3 months ago (1 children)

Diffusion is the issue, not text gen.

[–] DarkCloud@lemmy.world 4 points 3 months ago* (last edited 3 months ago)

There are already censorship-free versions of Stable Diffusion available. You can run them on your own computer for free.