this post was submitted on 02 Jul 2023

LocalLLaMA


Community to discuss LLaMA, the large language model created by Meta AI.

This is intended to be a replacement for r/LocalLLaMA on Reddit.


So what is currently the best and easiest way to use an AMD GPU? For reference, I own an RX 6700 XT and want to run a 13B model (maybe a SuperHOT variant), but I'm not sure if my VRAM is enough for that. Until now I have always stuck with llama.cpp, since it's quite easy to set up. Does anyone have any suggestions?
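A rough back-of-the-envelope check suggests 12 GB of VRAM should be enough for a 13B model at 4-bit quantization. The sketch below works through the arithmetic; the bits-per-weight and KV-cache figures are ballpark assumptions, not exact numbers for any particular quant format.

    # Back-of-the-envelope VRAM estimate for a 13B model (ballpark assumptions, not exact figures)
    params = 13e9                  # parameter count
    bits_per_weight = 4.8          # roughly what a Q4_K_M-style 4-bit quant averages out to
    weights_gb = params * bits_per_weight / 8 / 1024**3
    kv_cache_gb = 1.6              # ~fp16 KV cache for a 2048-token context on a 13B model
    print(f"~{weights_gb:.1f} GB weights + ~{kv_cache_gb:.1f} GB KV cache")
    # ~7.3 GB + ~1.6 GB, which fits comfortably in the 12 GB of an RX 6700 XT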

[–] Mixel@feddit.de 2 points 1 year ago

Yes, thank you for the information, I really appreciate it! I decided to go with kobold.cpp for the meantime, using CLBlast, which overall works way better than standard CPU inference. But I'm also looking into the ROCm support for llama.cpp, which I am currently trying out.
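Offloading layers to the GPU looks roughly the same whether llama.cpp is built with CLBlast or ROCm. Here is a minimal sketch using the llama-cpp-python bindings, assuming they were installed against a GPU-enabled build; the model path is a placeholder.

    # Minimal sketch: load a quantized 13B model and offload its layers to the GPU.
    # Assumes llama-cpp-python was built against a GPU backend (CLBlast or ROCm/hipBLAS).
    from llama_cpp import Llama

    llm = Llama(
        model_path="./models/llama-13b.q4_K_M.bin",  # placeholder path to a quantized model
        n_gpu_layers=40,   # a 13B LLaMA has 40 layers; lower this if VRAM runs out
        n_ctx=2048,        # context window
    )
    out = llm("Q: Which GPU backends can llama.cpp use?\nA:", max_tokens=64)
    print(out["choices"][0]["text"])

With full offload the whole model runs on the GPU; partial offload (a smaller n_gpu_layers) is the usual fallback when VRAM is tight.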