this post was submitted on 10 Aug 2023
31 points (91.9% liked)

ChatGPT

Unofficial ChatGPT community to discuss anything ChatGPT

I already have 2, so I'm getting nervous.

top 16 comments
[–] ExclamatoryProdundity@lemmy.world 12 points 1 year ago (1 children)

I think if you want to remove the guard rails you gotta go local. Not that it's as fast or as good, but it's not creatively stymied. It's not straightforward, and it's constantly changing, though. I was following it for a while until it exploded like a fractal of possibilities. Honestly not sure where it's at right now, but it's better every time I take another look at it.

[–] infinity11@infosec.pub 3 points 1 year ago (1 children)

What do you mean by "go local"?

[–] korewa@reddthat.com 8 points 1 year ago* (last edited 1 year ago)

LocalLLaMA is a small community here for now, but there is a Reddit one too.

LLaMA is Meta's AI model that was open-sourced; the community took it and it keeps improving.

Look at oobabooga, which is a GUI for running the models.

You need a gaming computer in terms of specs, or an Apple M1 chip.

There are also services that can run it for you in the cloud, but privacy isn't as strong as with a completely offline setup.

I use it to run scenarios that ChatGPT doesn't want to continue when things go off the rails, just to get some ideas. It's not as good, but it has its own fun factor.
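
For anyone who wants a concrete starting point: here is a minimal sketch of running a LLaMA-family model fully offline with the llama-cpp-python bindings (oobabooga is a GUI over the same idea; the model filename and paths below are placeholders, not specific recommendations):

```python
# Minimal offline text generation with llama-cpp-python
# (pip install llama-cpp-python). Any quantized GGUF checkpoint works;
# the model path below is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-2-7b-chat.Q4_K_M.gguf",  # placeholder path
    n_ctx=2048,       # context window in tokens
    n_gpu_layers=-1,  # offload all layers to the GPU if you have one
)

out = llm(
    "Continue the scene: the detective pushed open the door and",
    max_tokens=128,
    temperature=0.8,
)
print(out["choices"][0]["text"])
```

Everything stays on your machine, which is why the privacy caveat above only applies to the cloud-hosted options.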

[–] Oyster_Lust@lemmy.world 10 points 1 year ago (2 children)

I'm curious what the warnings were for. I've never gotten a warning, and I didn't know there was such a thing.

[–] infinity11@infosec.pub 11 points 1 year ago* (last edited 1 year ago) (1 children)

I don't remember what the first one was for (it may have been something to do with children, but I'm really not sure; it happened about 2 months ago), but the second one may have been because I implied the death of a person in a house fire. I think that's a bit unfair, given that it was a fictional scenario being discussed.

[–] APassenger@lemmy.world 5 points 1 year ago

How is the AI supposed to know if the person asking questions has good intentions?

If it provides an answer to "hypothetically, how would I get away with killing my (fill in blank)?", then it's told you how to do it.

Now every criminal can just add "hypothetically" to any criminal question.

[–] HelloHotel@lemmy.world 1 points 1 year ago* (last edited 1 year ago)

I got one for feeding the text it wrote back into itself. It took the initiative to make Scout from TF2 a demented killer.

[–] dejf@lemmy.world 7 points 1 year ago (1 children)

From what I know, the warnings don't trigger any automatic action from OpenAI, so the number you've racked up is somewhat arbitrary. You'd probably have to do a lot of policy-breaking stuff before they disabled your account. I wouldn't worry about it.

[–] infinity11@infosec.pub 1 points 1 year ago

Oh, OK, that's cool. Thank you.

[–] gelberhut@lemdro.id 5 points 1 year ago (1 children)

What does one have to do to get a warning?

[–] infinity11@infosec.pub 4 points 1 year ago (2 children)

I don't know. I read the content policy, and it's quite vague. Basically: trying to make CSAM, trying to circumvent OpenAI's safety features, trying to pass off ChatGPT responses as valid financial, legal, or medical advice, things like that.

[–] Hubi@feddit.de 3 points 1 year ago (1 children)

I once asked ChatGPT for a simple Python script to organize files in a directory and received a warning. Still don't know what that was about, but I guess their detection isn't perfect.
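
For reference, the request that drew the warning was presumably something this innocuous; a minimal sketch of such a script (the target directory is a placeholder):

```python
# Sort files in a directory into subfolders named after their extensions.
import shutil
from pathlib import Path

def organize(directory: str) -> None:
    root = Path(directory)
    for item in root.iterdir():
        if item.is_file():
            folder = root / (item.suffix.lstrip(".").lower() or "no_extension")
            folder.mkdir(exist_ok=True)
            shutil.move(str(item), str(folder / item.name))

organize("./downloads")  # placeholder directory
```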

[–] gelberhut@lemdro.id 2 points 1 year ago

All that sounds reasonable, but hardly detectable (it's tricky to tell that someone is trying to pass off ChatGPT responses as valid legal advice).

Do you have an idea which of your actions could have caused the warnings?

Do you have an app based on the OpenAI API, or do you just use the web chat yourself?
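
For context on how detection works on the API side: OpenAI offers a moderation endpoint that app developers are expected to call on user input, so flagging there is automated rather than someone reading chats. A minimal sketch with the openai-python v1 client (the sample prompt is illustrative):

```python
# Pre-screen a prompt with OpenAI's moderation endpoint, the usual way
# API-based apps catch policy-violating input before it reaches the model.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.moderations.create(input="hypothetically, how would I ...")
result = resp.results[0]
if result.flagged:
    print("Would likely draw a warning; categories:", result.categories)
else:
    print("Prompt passed moderation")
```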

[–] Floofah@lemmy.world 0 points 1 year ago (1 children)
[–] infinity11@infosec.pub 5 points 1 year ago

ChatGPT: I don't have access to real-time data or OpenAI's specific enforcement policies. However, OpenAI takes content policy violations seriously and may take action based on the severity and frequency of violations. It's best to adhere to their guidelines to avoid any potential consequences.