this post was submitted on 02 Oct 2023
26 points (96.4% liked)

LocalLLaMA


Community to discuss LLaMA, the large language model created by Meta AI.

This is intended to be a replacement for r/LocalLLaMA on Reddit.

founded 1 year ago

Trying something new: I'm going to pin this thread as a place for beginners to ask what may or may not be stupid questions, to encourage both the asking and the answering.

Depending on activity level, I'll either make a new one once in a while or just leave this one up forever as a place to learn and ask.

When asking a question, try to make clear what your current knowledge level is and where you may have gaps; this should help people give more useful, concise answers!

[–] drekly@lemmy.world 4 points 1 year ago (2 children)

What can I run on a 1080ti and how does it compare to what's available in general?

[–] lynx@sh.itjust.works 7 points 1 year ago (1 children)

There is a Hugging Face Space where you can select a model and your graphics card and see whether you can run it, or how many cards you would need to run it: https://huggingface.co/spaces/Vokturz/can-it-run-llm

You should be able to run inference on any 7B or smaller model with quantization.
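The rule of thumb behind that Space can be sketched as a back-of-the-envelope estimate: weight memory is roughly parameter count times bits per weight, plus some headroom for activations and the KV cache. The constants below (20% overhead) are assumptions for illustration, not exact figures.

```python
def vram_estimate_gb(params_billion: float, bits_per_weight: int,
                     overhead_frac: float = 0.2) -> float:
    """Rough VRAM estimate: quantized weights plus ~20% for
    activations and KV cache (the overhead fraction is a guess)."""
    weight_gb = params_billion * bits_per_weight / 8  # 1B params at 8-bit ~ 1 GB
    return weight_gb * (1 + overhead_frac)

# A 7B model at 4-bit quantization:
print(round(vram_estimate_gb(7, 4), 1))  # ~4.2 GB, well within a 1080 Ti's 11 GB
```

Real usage also depends on context length and the runtime, so treat the Space's answer as the more reliable check.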

[–] drekly@lemmy.world 5 points 1 year ago

Wow thank you I'll look into it!

[–] justynasty@lemmy.kya.moe 1 points 1 year ago

You can download 7B and 13B Q8 models for such a GPU; 30B models, even at Q2, would probably run out of memory.

Larger models generally have lower perplexity (i.e., they are more coherent). You can run conversational models, but not the ones that act as a near-limitless knowledge base.
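For reference, perplexity is just the exponential of the average negative log-likelihood the model assigns to each token, so lower means the model is less "surprised" by real text. A minimal sketch (the sample NLL values are made up for illustration):

```python
import math

def perplexity(token_nlls: list[float]) -> float:
    """Perplexity = exp(mean negative log-likelihood per token).
    Lower values mean the model predicts the text better."""
    return math.exp(sum(token_nlls) / len(token_nlls))

# Hypothetical per-token NLLs from two models on the same text:
print(round(perplexity([2.0, 2.2, 1.8]), 2))  # larger model: ~7.39
print(round(perplexity([2.8, 3.0, 2.6]), 2))  # smaller model: higher, less coherent
```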

Most of the paid services that provide open-source models use 13B models (for about $15 per month); you can run those for free on your own card.

Someone else needs to recommend a tool to run models locally.