this post was submitted on 27 Oct 2023
526 points (95.0% liked)

Technology

[–] kamenlady@lemmy.world 2 points 1 year ago (1 children)

I see... I'll have to ramp up my hardware exponentially...

[–] PeterPoopshit@lemmy.world 5 points 1 year ago* (last edited 1 year ago) (1 children)

Use llama.cpp. It runs on the CPU, so you don't have to spend $10k on a graphics card just to meet the minimum requirements. I run it on a shitty 3.0 GHz AMD FX-8300 and it runs OK. Most people probably have better computers than that.
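A minimal sketch of getting llama.cpp running CPU-only (assumes a Unix-like system with git and a C/C++ toolchain, plus a quantized GGUF model you've downloaded separately; the model path below is a placeholder, and the binary name has changed across releases, so check the repo's README for your version):

```shell
# Clone and build llama.cpp (CPU-only by default; no GPU toolkit needed)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make

# Run a prompt against a quantized GGUF model
# (./main in older releases, llama-cli in newer ones)
./main -m ./models/model.gguf -p "Hello" -n 64 -t 4   # -t = CPU threads
```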

Note that gpt4all runs on top of llama.cpp, and despite gpt4all having a GUI, it isn't any easier to use than llama.cpp, so you might as well use the one with less bloat. Just remember: if something isn't working in llama.cpp, it's going to fail in exactly the same way in gpt4all.

[–] kamenlady@lemmy.world 1 points 1 year ago (1 children)

Gonna look into that - thanks

[–] NotMyOldRedditName@lemmy.world 3 points 1 year ago* (last edited 1 year ago)

Check this out

https://github.com/oobabooga/text-generation-webui

It has a one-click installer and can use llama.cpp

From there you can download models and try things out.

If you don't have a really good graphics card, maybe start with 7B models. Then you can try 13B and compare performance and results.
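The 7B-vs-13B advice comes down to memory: a quantized model's footprint is roughly parameter count times bits per weight. Here's a back-of-the-envelope sketch (the helper function and the 20% overhead factor for KV cache and runtime buffers are my own assumptions, not anything from llama.cpp):

```python
def model_ram_gb(n_params_billion: float, bits_per_weight: float,
                 overhead: float = 1.2) -> float:
    """Rough RAM estimate to load quantized weights, with ~20%
    extra for KV cache and runtime buffers (a rough assumption)."""
    weight_bytes = n_params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 2**30

# 7B at 4-bit fits in well under 8 GB of RAM; 13B needs noticeably more.
print(f"7B  @ 4-bit: {model_ram_gb(7, 4):.1f} GB")   # ~3.9 GB
print(f"13B @ 4-bit: {model_ram_gb(13, 4):.1f} GB")  # ~7.3 GB
```

So a machine with 8 GB of RAM can handle a 4-bit 7B model comfortably, while 13B starts pushing the limit once the OS and other programs are counted.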

Llama.cpp will spread the load across the CPU and as much GPU as you have available (controlled by the number of layers to offload, which you can set on a slider).
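The layer slider maps to llama.cpp's GPU-offload count: the first N transformer layers go to the GPU and the rest stay on the CPU. A simplified illustration of that split (this helper is hypothetical, not the actual implementation):

```python
def split_layers(total_layers: int, n_gpu_layers: int) -> tuple[int, int]:
    """Offload the first n_gpu_layers layers to the GPU, keep the
    remainder on the CPU; requests beyond the model size are clamped."""
    gpu = max(0, min(n_gpu_layers, total_layers))
    return gpu, total_layers - gpu

# A 7B LLaMA model has 32 transformer layers; offloading 20 of them:
gpu, cpu = split_layers(32, 20)
print(gpu, cpu)  # 20 on GPU, 12 on CPU
```

Setting the slider higher than your VRAM allows is what causes out-of-memory errors, so the usual approach is to raise it until the model no longer fits, then back off a few layers.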