this post was submitted on 05 Sep 2023
35 points (92.7% liked)
Privacy
Why would I use this ChatGPT thing when I can self-host Llama 2 or Falcon, which is free and open source?
I’m a bit out of the loop with LLMs, but it depends on what you’re doing.
Last I heard, you’re going to want a 65B or 70B model if you want something that runs as well as GPT-3.5, but good luck getting a GPU with enough VRAM to hold it without breaking the bank. You could offload layers to system RAM or even swap, but that can come with pretty steep performance penalties.
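To put rough numbers on that VRAM problem: weights-only memory is just parameter count times bytes per parameter. A back-of-the-envelope sketch (this deliberately ignores KV cache, activations, and runtime overhead, which all add on top):

```python
def weights_vram_gb(params_billion: float, bits_per_param: int) -> float:
    """Rough GB needed just to hold the model weights in memory.

    Ignores KV cache, activations, and framework overhead,
    so real usage is higher than this.
    """
    return params_billion * 1e9 * bits_per_param / 8 / 1e9

# A 70B model in fp16 needs ~140 GB for weights alone --
# far beyond any single consumer GPU.
print(weights_vram_gb(70, 16))  # 140.0

# 4-bit quantization cuts that to ~35 GB, still more than a
# 24 GB consumer card can hold -- hence offloading layers to RAM.
print(weights_vram_gb(70, 4))   # 35.0
```

That’s why the practical options for 70B-class models are multi-GPU setups, aggressive quantization, or splitting layers between GPU and system RAM.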
I haven’t heard of a model that’s comparable to GPT-4, but like I said, I’m pretty out of the loop. You’d probably run into the same VRAM and performance issues, only worse, since bigger models are usually better.
All that being said, you might not need some huge model depending on what you’re doing. There are smaller models that fit on consumer GPUs and can perform surprisingly well in certain situations. There are also uncensored variants that won’t give you a moral lecture if you ask for something questionable. Then there’s the privacy aspect: I absolutely would not trust OpenAI with any personal information. I believe personal accounts can opt out of having their data used for training, but you’re still trusting OpenAI with whatever information you send them.
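For the smaller-models-on-consumer-GPUs point, the same weights-only arithmetic gives a crude fit check. The 1.5 GB overhead allowance here is an illustrative assumption for KV cache and runtime buffers, not a measured figure:

```python
def fits_in_vram(params_billion: float, bits_per_param: int,
                 vram_gb: float, overhead_gb: float = 1.5) -> bool:
    """Crude check: do quantized weights plus a fixed overhead
    allowance fit in a card's VRAM? overhead_gb is a rough
    placeholder for KV cache and runtime buffers."""
    weights_gb = params_billion * 1e9 * bits_per_param / 8 / 1e9
    return weights_gb + overhead_gb <= vram_gb

print(fits_in_vram(7, 4, 8))    # True: ~3.5 GB of weights on an 8 GB card
print(fits_in_vram(13, 4, 12))  # True: ~6.5 GB of weights on a 12 GB card
print(fits_in_vram(70, 4, 24))  # False: ~35 GB won't fit even in 24 GB
```

So a 7B or 13B model at 4-bit is comfortably in consumer-GPU territory, which is where the "surprisingly good for certain situations" models tend to live.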
I'm personally hoping the hardware mismatch issues will sort themselves out over the next few years, so I can hold off on upgrading my GPU until then.