It's getting there. In the next few years, as hardware gets better and models get more efficient, we'll be able to run these systems entirely locally.
I'm already doing it, but I have some higher-end hardware.
Could you please share your process for us mortals?
Stable Diffusion SDXL Turbo model running in Automatic1111 for image generation.
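For anyone who wants the same pipeline without the web UI, here's a minimal Python sketch using Hugging Face's diffusers library (the model ID and one-step settings come from the stabilityai/sdxl-turbo docs; the prompt and output filename are just placeholders):

```python
# Minimal text-to-image sketch with SDXL Turbo via the diffusers library.
# Assumes a CUDA GPU; drop .to("cuda") to run on CPU (much slower).
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sdxl-turbo",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

# SDXL Turbo is distilled for single-step generation with guidance disabled.
image = pipe(
    prompt="a cozy cabin in a snowy forest, golden hour",  # placeholder prompt
    num_inference_steps=1,
    guidance_scale=0.0,
).images[0]
image.save("cabin.png")  # placeholder filename
```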
Ollama with ollama-webui for an LLM. I like the Solar model (the solar:10.7b tag in Ollama). It's lightweight, fast, and gives really good results (see the API sketch below).
I run it on some beefy hardware, but that's not strictly necessary.
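If you ever want to script against that setup, Ollama exposes a local REST API. A minimal sketch, assuming the server is running on its default port (11434) and the model has already been pulled:

```python
# Query a local Ollama server over its REST API.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "solar:10.7b",
        "prompt": "Explain VRAM in one sentence.",  # placeholder prompt
        "stream": False,  # return one JSON object instead of a token stream
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["response"])
```

The same local server backs ollama-webui, so scripts and the web UI share the same downloaded models.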
Depends on what AI you're looking for. I don't know of an LLM (a language model, think ChatGPT) that works decently on personal hardware, but I also haven't really looked. For art generation, though, look up the Automatic1111 installation instructions for Stable Diffusion. If you have a decent GPU (I was running it slowly on a 1060 until I upgraded), it's a simple enough process to get started, there's tons of info online about it, and it all runs on local hardware.
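If you're not sure your card is up to it, a quick sanity check with PyTorch (any environment with a CUDA build of torch installed) will show what's detected and how much VRAM it has:

```python
# Print the GPU PyTorch can see and its total VRAM.
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"GPU: {props.name}, VRAM: {props.total_memory / 2**30:.1f} GiB")
else:
    print("No CUDA GPU detected; generation will fall back to CPU (slow).")
```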
Ollama with ollama-webui. Models like solar:10.7b and mistral:7b run nicely on local hardware; solar:10.7b should work well on a card with 8GB of VRAM.
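If you'd rather script the download than run `ollama pull` by hand, the same local API can do it. A sketch using one of the model tags above (the endpoint streams JSON status lines as layers download):

```python
# Pull a model through the local Ollama API instead of the CLI.
import json
import requests

with requests.post(
    "http://localhost:11434/api/pull",
    json={"name": "solar:10.7b"},
    stream=True,
    timeout=None,  # large downloads can take a while
) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines():
        if line:
            print(json.loads(line).get("status", ""))
```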
If you have really low specs, use the recently open-sourced Microsoft Phi model.
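For that low-spec route, a minimal sketch loading Phi through Hugging Face transformers, assuming the microsoft/phi-2 checkpoint (which at release needed trust_remote_code enabled):

```python
# Run Microsoft's small Phi model via Hugging Face transformers.
# At ~2.7B parameters it fits in a few GB of RAM/VRAM, hence the low-spec fit.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2",
    torch_dtype="auto",
    trust_remote_code=True,
)

inputs = tokenizer("Explain what an LLM is in one sentence.", return_tensors="pt")  # placeholder prompt
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```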