872 points (98.8% liked) · submitted 30 Sep 2023 by wiki_me@lemmy.ml to c/opensource@lemmy.ml
[-] salarua@sopuli.xyz 11 points 1 year ago* (last edited 1 year ago)

shit just went from 0 to 100 real fucking quick

for real though, if you ask an LLM how to make a bomb, it's not the LLM that's the problem

[-] mojo@lemm.ee 0 points 1 year ago* (last edited 1 year ago)

If it has the information, why not? Why should you be restricted by what a company deems appropriate? I obviously picked the bomb as an extreme example, but that's the point.

Just like I could demonize encryption by saying that it lets me secretly send illegal content. If I asked you straight up whether encryption is a good thing, you'd probably agree. But if I brought up its inevitable bad uses in a shocking way, would you still defend the ability to do that, or would you change your stance and say encryption is bad?

Having a strong stance means also defending the potential harmful effects, since they're inevitable. It's hard to keep your values consistent when something that's for the greater good also has potential harmful effects. Encryption is a perfect example of that.

[-] Spzi@lemm.ee 3 points 1 year ago

"If it has the information, why not?"

Naive altruistic reply: To prevent harm.

Cynic reply: To prevent liabilities.

If the restaurant refuses to put your fries into your coffee because that's not on the menu, that's their call. It can be for many reasons, but it's literally their business, not yours.

If we replace the fries with a fuse and the coffee with gunpowder, I hope there are more regulations in place. What they sell, to whom, and in what form affects more people than just the buyer and the seller.

Although I find it pretty surprising that, in this case, corporations are self-regulating faster than lawmakers can say 'AI'. That's odd.

[-] mojo@lemm.ee 1 points 1 year ago

This is very well said. They're allowed not to serve you these things, but we should still be able to use them ourselves and make our glorious gunpowder fries coffee with a spice of freedom all we want!

[-] Lionir@beehaw.org 2 points 1 year ago

This is a false equivalence. Encryption only works if nobody but the intended recipient can decrypt it. LLMs still work even if you censor illegal content from their output.

[-] mojo@lemm.ee 0 points 1 year ago

You're missing the point. My point is that if you want to have a consistent viewpoint, you need to acknowledge and defend the harmful sides. Encryption can objectively cause harm, but it should absolutely still be defended.

[-] bear@slrpnk.net 3 points 1 year ago* (last edited 1 year ago)

What the fuck is this "you should defend harm" bullshit? Did you hit your head during an entry-level philosophy class or something?

The reason we defend encryption even though it can be used for harm is that breaking it means you can't use it for good, and that's far worse. We don't defend the harm it can do in and of itself; why the hell would we? We defend it in spite of the harm, because the good greatly outweighs the harm and the two cannot be separated. The same isn't true for LLMs.

[-] mojo@lemm.ee 1 points 1 year ago

We don't believe that at all; we believe privacy is a human right. Also, you're just objectively wrong about LLMs. Offline, uncensored LLMs already exist and will perpetually exist. We don't defend tools doing harm; we acknowledge it.

[-] bear@slrpnk.net 3 points 1 year ago

"We don't believe that at all; we believe privacy is a human right."

That's just a different way to phrase what I said about defending the good side of encryption.

"Offline, uncensored LLMs already exist and will perpetually exist"

I didn't say they don't exist; I said that the help and the harm aren't inseparable the way they are with encryption.

"We don't defend tools doing harm; we acknowledge it."

"My point is that if you want to have a consistent view point, you need to acknowledge and defend the harmful sides."

If you want to walk it back, fine, but don't pretend you didn't say it.

[-] Lionir@beehaw.org 1 points 1 year ago

This is just enlightened centrism. No. Nobody needs to defend the harms done by technology.

We can accept the harm if the good is worth it - we have no need to defend it.

LLMs can work without the harm.

It makes sense to make technology better by reducing the harm it causes when it is possible to do so.

[-] janguv@lemmy.dbzer0.com 1 points 1 year ago

He would have been better off talking not about harm directly, but about the ability to cause harm; he actually used that wording in an earlier comment in this chain. (Basically strawmanned himself lol.)

Because as a standalone argument for encryption, it's fairly sound – hey, the ability of somebody to cause harm via encrypted messaging channels is the selfsame ability to do good (prevent spying, protect privacy, shield whistleblowers, etc.), and since the good outweighs the bad, we have to protect the ability to cause harm (sadly).

The problem is that it's still disanalogous – the ability to cause harm via LLM use is not the selfsame ability to do good (or otherwise to do what you want). My LLM refusing to tell me how to make a bomb has no impact on its ability to tell me how to make a pasta bake.

[-] wagesj45@kbin.social -1 points 1 year ago