this post was submitted on 22 Oct 2024
-6 points (28.6% liked)

Selfhosted

I'm looking for a resource-efficient AI model for text generation (math, coding, etc.) that will work with LocalAI. Which model should I use? I don't want it to use more than 1-3 GB of RAM. I'll run it on a VPS to use with Nextcloud.

Edit: I'm using Mistral AI and Groq.com instead of self-hosting the models. They both have generous free plans.
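Both expose OpenAI-compatible endpoints, so the same client code works against either. A minimal sketch using the official openai Python client (the base URL is Groq's; the model name is illustrative, check each provider's current list):

```python
# Query a hosted OpenAI-compatible API instead of self-hosting the model.
# Assumes: `pip install openai`; an API key from the provider.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.groq.com/openai/v1",  # Mistral: https://api.mistral.ai/v1
    api_key="YOUR_API_KEY",  # placeholder
)

response = client.chat.completions.create(
    model="llama-3.1-8b-instant",  # illustrative; pick from the provider's model list
    messages=[{"role": "user", "content": "Sum 1 to 100 in one line of Python."}],
)
print(response.choices[0].message.content)
```

As far as I can tell, Nextcloud's OpenAI/LocalAI integration app can point at a custom OpenAI-compatible base URL, so the setup looks the same whether the backend is hosted or self-hosted.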

top 7 comments
leisesprecher@feddit.org 24 points 2 weeks ago

None. There is no model that can output anything even remotely usable in that tiny amount of RAM, and certainly not with the few CPU cycles your VPS has to offer.

hendrik@palaver.p3x.de 7 points 2 weeks ago (last edited 2 weeks ago)

Correct answer. There is no general-purpose AI model that can fit into 1 GB. Small models that size do exist, but they handle very specific, narrow tasks: sentiment analysis, object detection, word embeddings for vector databases...

For coding, answering questions, and generating text, you'd need something like 6-8 GB minimum. For maths, way more than that, and they'll still be throwing dice instead of giving correct answers.
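Rough back-of-the-envelope numbers to back that up, assuming 4-bit quantized weights and a guessed 1.5x factor for KV cache and runtime overhead:

```python
# Rough RAM estimate for a quantized LLM: weights plus a fudge factor
# for KV cache, activations, and runtime overhead. All approximate.

def estimate_ram_gb(params_billions: float,
                    bits_per_weight: float = 4.0,
                    overhead_factor: float = 1.5) -> float:
    weight_gb = params_billions * bits_per_weight / 8  # 1e9 params cancels 1e9 bytes/GB
    return weight_gb * overhead_factor

for size_b in (1, 3, 7, 13):
    print(f"{size_b:>2}B params -> ~{estimate_ram_gb(size_b):.1f} GB")

#  1B params -> ~0.8 GB  (fits the budget, but too small to be broadly useful)
#  3B params -> ~2.2 GB
#  7B params -> ~5.2 GB  (about where usable general-purpose models start)
# 13B params -> ~9.8 GB
```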

voracitude@lemmy.world 6 points 2 weeks ago

Hello I would like to run a neural network to play Cyberpunk 2077 at max settings, only catch is my rig is a month old potato, my monitor is a cracked windshield I ripped off the wreck of an old Pontiac at the local junkyard, the night attendant feels bad for me so he lets me scavenge sometimes, plz help

emuspawn@orbiting.observer 6 points 2 weeks ago

try pfizer/poppy-lrud-normal-128, run it straight offff your neural chip and feed it 1 GB RAM you'll be gud2go

voracitude@lemmy.world 4 points 2 weeks ago

This worked, thank

brucethemoose@lemmy.world 4 points 1 week ago

To actually answer this: you could look into free APIs for open-source models, which have daily limits but are otherwise largely catch-free. You could even mirror endpoints on your VPS if you need to, or host "middleware" like prompt formatters and enhancers (see the sketch below).

I say this because, as others said, you cannot actually host AI on a VPS...
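For the record, here's a minimal sketch of that middleware idea, assuming FastAPI and httpx on the VPS and a free OpenAI-compatible upstream (the Groq URL and the injected system prompt are just examples):

```python
# Tiny relay: the VPS exposes /v1/chat/completions and forwards each
# request to a free hosted API, rewriting the prompt on the way through.
# Assumes: `pip install fastapi httpx uvicorn`; GROQ_API_KEY in the env.
import os

import httpx
from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse

UPSTREAM = "https://api.groq.com/openai/v1/chat/completions"  # illustrative

app = FastAPI()

@app.post("/v1/chat/completions")
async def relay(request: Request) -> JSONResponse:
    body = await request.json()
    # The "middleware" step: inject a system prompt before forwarding.
    body.setdefault("messages", []).insert(
        0, {"role": "system", "content": "Answer concisely."}
    )
    async with httpx.AsyncClient(timeout=60) as client:
        upstream = await client.post(
            UPSTREAM,
            json=body,
            headers={"Authorization": f"Bearer {os.environ['GROQ_API_KEY']}"},
        )
    return JSONResponse(upstream.json(), status_code=upstream.status_code)
```

Since the VPS only relays text, 1-3 GB of RAM is plenty for something like this.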

Showroom7561@lemmy.ca 2 points 2 weeks ago

I don't know if any specific model will be the right answer, but Qualcomm has its Snapdragon event going on right now, and many of the advancements they're touting are specifically for local AI processing.

So, consumer computing power will improve significantly over the next few years, with local AI being the largest beneficiary.