Thanks in advance.

fhein@lemmy.world 15 points 3 months ago* (last edited 3 months ago)

For LLMs it entirely depends on what size models you want to use and how fast you want them to run. Since there are diminishing returns to increasing model size, i.e. a 14B model isn't twice as good as a 7B model, the best bang for the buck comes from the smallest model whose quality you find acceptable. And if generation speeds of around 1 token/second are acceptable, you'll probably get more value for money with partial offloading, i.e. keeping part of the model in system RAM and only running some of its layers on the GPU.
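
The comment doesn't name a specific tool, but as a minimal sketch of what partial offloading looks like in practice, here's llama.cpp via the llama-cpp-python bindings, where `n_gpu_layers` decides how many transformer layers go to the GPU and the rest run on the CPU. The model path and layer count are placeholders; tune `n_gpu_layers` to whatever fits your VRAM.

```python
from llama_cpp import Llama

# Hypothetical GGUF file; any quantized model you have locally works the same way.
llm = Llama(
    model_path="./models/mistral-7b-instruct.Q4_K_M.gguf",
    n_gpu_layers=20,   # offload 20 layers to the GPU, keep the rest in system RAM
    n_ctx=4096,        # context window
)

output = llm("Q: What is self-hosting? A:", max_tokens=128)
print(output["choices"][0]["text"])
```

Raising `n_gpu_layers` until you run out of VRAM is the usual way to find the speed/memory sweet spot; `-1` offloads everything if the whole model fits.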

If your answer is "I don't know what models I want to run" then a second-hand RTX 3090 is probably your best bet. If you want to run larger models, building a rig with multiple (used) RTX 3090s is probably still the cheapest way to do it.
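
To put the single- vs. multi-GPU question in rough numbers (my own ballpark figures, not from the comment): a ~4-bit quantized model takes on the order of 0.6 bytes per parameter plus a few GB of working space, so a 24 GB RTX 3090 covers roughly up to 30B-class models, while 70B-class models want two cards.

```python
def q4_vram_estimate_gb(params_billion: float, overhead_gb: float = 2.0) -> float:
    """Very rough VRAM estimate for a ~4-bit quantized model.

    Assumes ~0.6 bytes per parameter (typical Q4 GGUF) plus a flat
    overhead for KV cache and buffers -- a ballpark, not a guarantee.
    """
    bytes_per_param = 0.6
    return params_billion * bytes_per_param + overhead_gb

for size in (7, 14, 34, 70):
    need = q4_vram_estimate_gb(size)
    cards = int(-(-need // 24))  # ceiling division: how many 24 GB RTX 3090s
    print(f"{size:>3}B model: ~{need:.1f} GB -> {cards}x RTX 3090")
```

Longer context windows and less aggressive quantization push these numbers up, so treat the estimate as a lower bound when deciding how many cards to buy.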