this post was submitted on 02 Oct 2023
26 points (96.4% liked)
LocalLLaMA
2235 readers
Community to discuss about LLaMA, the large language model created by Meta AI.
This is intended to be a replacement for r/LocalLLaMA on Reddit.
founded 1 year ago
Knowledge level: Enthusiastic spectator. I don't make or finetune LLMs, but I do follow AI news, try out local LLMs, and use things like GitHub Copilot and ChatGPT.
Question: Is it better to use Code Llama 34B or Llama 2 13B for a non-coding task?
Context: I'm able to run either model locally, but I can't run the larger 70B model. So I was wondering if running the 34B Code Llama would be better since it's larger. I've heard that models with better coding abilities are also better at other kinds of tasks, and that they're better at logic (I don't know if this is true, I just heard it somewhere).
I feel like for non-coding tasks you're sadly better off with a 13B model; CodeLlama lost a lot of general knowledge/chattiness in its coding finetuning.
THAT SAID, it kind of depends on what you're trying to do. If you're aiming for RP, don't bother; if you're thinking about summarization, logic tasks, or RAG, CodeLlama may do totally fine, so more info would help.
If you have 24 GB of VRAM (my assumption if you can load a 34B), you could also play around with 70B at 2.4 bpw using exllamav2 (if that made no sense, let me know if it interests you and I'll elaborate), but it'll probably be slower.
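For context on the 2.4 bpw suggestion, here's a rough back-of-the-envelope sketch (my own numbers, not from the commenter) of the weight-memory math. It counts only the weights and ignores KV cache and activation overhead, which is why ~20 GiB of weights still leaves a 24 GB card fairly tight:

```python
# Hypothetical estimate: VRAM needed for model weights alone,
# given a parameter count and a quantization level in bits per weight.

def weight_vram_gib(n_params: float, bits_per_weight: float) -> float:
    """Approximate weight memory in GiB (ignores KV cache/activations)."""
    total_bytes = n_params * bits_per_weight / 8  # bits -> bytes
    return total_bytes / (1024 ** 3)              # bytes -> GiB

# A 70B model at a few quantization levels:
for bpw in (2.4, 4.0, 16.0):
    print(f"70B @ {bpw:>4} bpw ~ {weight_vram_gib(70e9, bpw):6.1f} GiB")
# 2.4 bpw comes out just under 20 GiB, hence the 24 GB VRAM assumption;
# fp16 (16 bpw) would need well over 100 GiB.
```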