this post was submitted on 02 Oct 2023
27 points (96.6% liked)

LocalLLaMA

2249 readers

Community to discuss LLaMA, the large language model created by Meta AI.

This is intended to be a replacement for r/LocalLLaMA on Reddit.

founded 1 year ago

Trying something new: I'm going to pin this thread as a place for beginners to ask what may or may not be stupid questions, to encourage both the asking and the answering.

Depending on activity level, I'll either make a new one once in a while or leave this one up permanently as a place to learn and ask.

When asking a question, try to make clear what your current knowledge level is and where you may have gaps; this should help people provide more useful, concise answers!

[–] drekly@lemmy.world 4 points 1 year ago (3 children)

What can I run on a 1080ti and how does it compare to what's available in general?

[–] lynx@sh.itjust.works 7 points 1 year ago (1 children)

There is a Hugging Face Space where you can select a model and your graphics card and see whether you can run it, or how many cards you would need to run it: https://huggingface.co/spaces/Vokturz/can-it-run-llm

You should be able to run inference on any 7B or smaller model with quantization.
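As a rough back-of-the-envelope check (a sketch, not an exact calculator — the 1.2x overhead factor for activations and KV cache is an assumption, and real usage varies by runtime and context length), you can estimate VRAM from parameter count and quantization bit width:

```python
# Rough VRAM estimate for LLM inference: weight memory plus a fudge
# factor for activations and KV cache. The 1.2x overhead is an
# assumed ballpark, not a measured figure.
def vram_gb(params_billions: float, bits_per_weight: int, overhead: float = 1.2) -> float:
    weight_bytes = params_billions * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

# A 7B model at 4-bit quantization vs. unquantized fp16,
# compared against a 1080 Ti's 11 GB of VRAM:
print(round(vram_gb(7, 4), 1))   # ~4.2 GB -> fits comfortably
print(round(vram_gb(7, 16), 1))  # ~16.8 GB -> does not fit
```

This is why 4-bit quantized 7B models are the sweet spot for an 11 GB card: the weights alone drop from ~14 GB at fp16 to ~3.5 GB at 4-bit, leaving headroom for context.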

[–] drekly@lemmy.world 5 points 1 year ago

Wow, thank you, I'll look into it!
