this post was submitted on 02 Oct 2023

LocalLLaMA


Community to discuss about LLaMA, the large language model created by Meta AI.

This is intended to be a replacement for r/LocalLLaMA on Reddit.

Trying something new: I'm going to pin this thread as a place for beginners to ask what may or may not be stupid questions, to encourage both the asking and the answering.

Depending on activity level, I'll either make a new one once in a while or just leave this one up forever as a place to learn and ask.

When asking a question, try to make it clear what your current knowledge level is and where you may have gaps; that should help people give more useful, concise answers!

[–] noneabove1182@sh.itjust.works 4 points 9 months ago (1 children)

You shouldn't need NVLink. I'm wondering if it's something specific to AWQ, since I know that both exllamav2 and llama.cpp support multi-GPU splitting in oobabooga.
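For context on why NVLink shouldn't be required: multi-GPU splitting in these frameworks shards layers across cards and moves activations over plain PCIe, so NVLink only speeds things up. A minimal sketch of how splitting typically looks with Hugging Face transformers/accelerate (the model id and memory caps below are hypothetical placeholders, not from this thread):

```python
def load_split_model(model_id: str, per_gpu: str = "12GiB"):
    """Sketch: shard a model across two GPUs without NVLink.

    accelerate's "auto" device map assigns contiguous layer blocks to
    GPU 0, GPU 1, and CPU overflow, capped by max_memory per device.
    """
    # Imported inside the function so the sketch is readable without
    # transformers/accelerate installed; both are required to run it.
    from transformers import AutoModelForCausalLM

    # Hypothetical per-device caps; adjust to your actual cards.
    max_memory = {0: per_gpu, 1: per_gpu, "cpu": "32GiB"}

    return AutoModelForCausalLM.from_pretrained(
        model_id,
        device_map="auto",      # let accelerate place layers across devices
        max_memory=max_memory,
    )
```

If a model loads and splits fine this way but not through a particular webui loader, that points at the loader integration rather than the hardware.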

[–] doodlebob@lemmy.world 2 points 9 months ago

I think you're right. I saw a post on Reddit mentioning basically the same things I'm seeing.

It looks like AutoAWQ supports it, but it might be an issue with how oobabooga implements it or something...
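One way to confirm whether the issue lives in oobabooga's wrapper or in AutoAWQ itself is to load the quantized model with AutoAWQ directly. A hedged sketch follows; the `fuse_layers` and `device_map` keyword arguments are assumptions to verify against your installed AutoAWQ version, and `quant_path` is a placeholder:

```python
def load_awq_directly(quant_path: str):
    """Sketch: load an AWQ model with AutoAWQ alone, bypassing oobabooga.

    If this shards cleanly across GPUs while oobabooga's loader fails,
    the problem is in the webui integration rather than AutoAWQ.
    """
    # Imported inside the function so the sketch is readable without
    # the packages installed; requires `pip install autoawq transformers`.
    from awq import AutoAWQForCausalLM
    from transformers import AutoTokenizer

    model = AutoAWQForCausalLM.from_quantized(
        quant_path,
        fuse_layers=True,   # fused kernels; try disabling if splitting misbehaves (assumption)
        device_map="auto",  # multi-GPU placement; check your AutoAWQ version accepts this
    )
    tokenizer = AutoTokenizer.from_pretrained(quant_path)
    return model, tokenizer
```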