this post was submitted on 04 Jan 2024
LocalLLaMA
Community to discuss LLaMA, the large language model created by Meta AI.
This is intended to be a replacement for r/LocalLLaMA on Reddit.
First few quants are up: https://huggingface.co/bartowski/WizardCoder-33B-V1.1-exl2
The 4.25 bpw quant should fit nicely into 24 GB of VRAM (3090, 4090).
Smaller sizes are still being created: 3.5, 3.0, and 2.4 bpw. A rough size estimate is sketched below.
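For a rough sense of why those bitrates fit, here is a back-of-the-envelope sketch (my own, not from the quant repo) that estimates the weight footprint from parameter count and bits per weight. The 33B parameter count and the listed bitrates come from the comment above; everything else is approximate and ignores KV cache, activations, and loader overhead:

```python
# Back-of-the-envelope size of the quantized weights alone, ignoring
# KV cache, activations, and loader overhead.

def quant_weight_gib(params_billion: float, bits_per_weight: float) -> float:
    """Approximate size of the quantized weights in GiB."""
    total_bytes = params_billion * 1e9 * bits_per_weight / 8
    return total_bytes / (1024 ** 3)

# WizardCoder-33B at the exl2 bitrates mentioned above
for bpw in (4.25, 3.5, 3.0, 2.4):
    print(f"{bpw:>4} bpw -> ~{quant_weight_gib(33, bpw):.1f} GiB of weights")

# 4.25 bpw comes out to roughly 16 GiB of weights, which is why it still
# leaves room for context on a 24 GB card (3090/4090).
```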
I am curious whether you prefer WizardCoder over Phind (https://huggingface.co/Phind/Phind-CodeLlama-34B-v2).
I run them on my 3090, although the machine also has 32 GB of system RAM.
I don't have a lot of experience with either at this point. I've used them here and there for programming questions, but I usually stick to 7B models because I use them for code completion, and I only find that useful if the model completes the code before I do, lol.
That said, I've had good answers overall from either one whenever I've pulled them out. It feels like WizardCoder should be better since it's so much newer, but in practice it hasn't been that different. Wish Phind would release an update :(
That makes sense, thank you for sharing.
I tend to use Copilot for IDE code completion, and I use the 34B models for automated refactors and code transforms where accuracy is a requirement.