this post was submitted on 23 Jul 2023
11 points (100.0% liked)

Stable Diffusion

1487 readers

Welcome to the Stable Diffusion community, dedicated to the exploration and discussion of the open source deep learning model known as Stable Diffusion.

Introduced in 2022, Stable Diffusion uses a latent diffusion model to generate detailed images based on text descriptions and can also be applied to other tasks such as inpainting, outpainting, and generating image-to-image translations guided by text prompts. The model was developed by the startup Stability AI, in collaboration with a number of academic researchers and non-profit organizations, marking a significant shift from previous proprietary models that were accessible only via cloud services.

founded 1 year ago

I was curious: do you run Stable Diffusion locally? On someone else's server? What kind of computer do you need to run SD locally?

top 14 comments
[–] IanM32@lemmy.world 5 points 1 year ago

I run it locally. I prefer having the most control I can over the install, what extensions I want to use, etc.

The most important thing for running it, in my opinion, is VRAM. The more the better; get as much as you can.

[–] korewa@reddthat.com 2 points 1 year ago

I run locally too. I have a 10 GB 3080.

I haven't had VRAM issues; could you elaborate on your statement?

I know that with local LLaMA I have been limited to 13B models.

[–] IanM32@lemmy.world 2 points 1 year ago

Stable Diffusion loves VRAM. The larger and more complex the images you're trying to produce, the more it'll eat.

My line of thinking is that if you have a slower GPU it'll generate slower, sure, but if you run out of VRAM it'll straight up fail and shout at you.

I'm not an expert in this field though, so grain of salt, YMMV, all that.
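If you want to see how much headroom you actually have, a quick generic PyTorch check (assuming torch is already installed, which it will be for SD) looks something like this:

```python
import torch

# Report total VRAM and what this process has currently allocated.
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    total_gb = props.total_memory / 1024**3
    used_gb = torch.cuda.memory_allocated(0) / 1024**3
    print(f"{props.name}: {total_gb:.1f} GB total, {used_gb:.2f} GB allocated")
else:
    print("No CUDA device found")
```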

[–] lloram239@feddit.de 1 points 1 year ago

"I know that with local LLaMA I have been limited to 13B models"

You can run llama.cpp on the CPU at reasonable speeds, making full use of normal RAM to run much larger models.
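If it helps, a minimal sketch with the llama-cpp-python bindings (the model file, context size, and thread count are just placeholders):

```python
from llama_cpp import Llama

# Load a quantized model entirely into system RAM and run it on the CPU.
# Model path, context size, and thread count are placeholders; adjust for your machine.
llm = Llama(
    model_path="models/llama-2-13b.Q4_K_M.gguf",
    n_ctx=2048,
    n_threads=8,
)

out = llm("Q: What is Stable Diffusion? A:", max_tokens=128)
print(out["choices"][0]["text"])
```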

As for 10 GB in SD, I run out of VRAM quite regularly when overdoing it; e.g. 1024x768 with multiple ControlNets and some other stuff is pretty much guaranteed to overflow it. I have to reduce the resolution when making use of ControlNet. Dreambooth training didn't even work at all for me due to lack of VRAM (it might be possible to work around, but at least the defaults weren't usable).

10 GB is still very much usable with SD, but one has to be aware of the limitations. The new SDXL will also increase the VRAM requirements a good bit.
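For what it's worth, if you script against the diffusers library instead of a UI, there are a couple of switches that trade speed for VRAM; a rough sketch (the model ID is just an example):

```python
import torch
from diffusers import StableDiffusionPipeline

# Trade speed for VRAM: half precision, sliced attention, CPU offload of idle parts.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,       # half precision roughly halves VRAM use
)
pipe.enable_attention_slicing()      # compute attention in slices to lower peak VRAM
pipe.enable_model_cpu_offload()      # keep submodules in system RAM until needed

image = pipe("a lighthouse at dusk, oil painting", height=768, width=1024).images[0]
image.save("lighthouse.png")
```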

[–] DrRatso@lemmy.ml 3 points 1 year ago

Runs fine with a 1660, but that is about 20 seconds for 512x512, and upscaling takes upwards of a minute.

If you want to run it online, I suggest the Paperspace paid tier. It's not too big of a hassle to set up, but you might have to wait a couple of minutes spamming refresh to get a better GPU; an instance can run for 6 hours, then it will be auto-shutdown. Generation is generally 2-4 seconds for 512 and 10-20 for 1024. Also, you will have to either download models every time, settle for only two or three models at a time, or fork over a couple of extra bucks for the permanent storage, as the base paid tier is only 15 GB.

[–] lloram239@feddit.de 2 points 1 year ago

Locally with automatic1111. I'd say 10 GB VRAM as a starting point to have a good experience (it can do up to 1024x768 in a single go, or a little less when ControlNet is involved). Though one can never have enough VRAM; when buying a new card I'd aim for 16 GB to have some spare room for future models. Image generation takes about 30 seconds. Upscaling is possible, but it can take quite a while and results can be a bit hit and miss.

The plain Stable Diffusion model is largely useless these days; go over to https://civitai.com/ and download something custom-trained, they give far better results. ControlNet is another absolute must-have and gives a lot of control over the resulting image (pose, 3D shape, sketch), along with far superior inpainting compared to img2img.
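In automatic1111 that's all point-and-click; if you're scripting it instead, the ControlNet part in diffusers looks roughly like this (untested sketch, model IDs and file names are only examples):

```python
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from diffusers.utils import load_image

# Pair a base checkpoint with a ControlNet (here a Canny edge-conditioned one).
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# The control image (a pre-computed edge map in this case) steers the composition.
edges = load_image("./canny_edges.png")
image = pipe("portrait photo, dramatic lighting", image=edges).images[0]
image.save("controlled.png")
```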

I have been playing around with SD for about half a year and it still blows my mind what kind of results you can get with rather minuscule effort. Though it's worth mentioning that the results are largely dictated by the AI; trying to get very specific results can be an absolute nightmare.

[–] voluntaryexilecat@lemmy.dbzer0.com 2 points 1 year ago

locally, always.

I even got it to run without a GPU, on just an old i5 CPU with 8 GB of system RAM (not VRAM) paired with 32 GB of swap. SD 1.5 takes 4-10 minutes per image, SDXL about 2 hours. But it works. With a GPU it's between 7 and 90 seconds per image, depending on model and settings.
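For reference, a minimal CPU-only run with the diffusers library looks roughly like this (a sketch, not my exact setup):

```python
import torch
from diffusers import StableDiffusionPipeline

# CPU-only sketch: no CUDA device needed, just plenty of system RAM (or swap) and patience.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float32,       # full precision; half precision is poorly supported on CPU
).to("cpu")
pipe.enable_attention_slicing()      # keeps peak memory lower on small machines

# Fewer steps keeps the multi-minute runtimes somewhat bearable.
image = pipe("a cottage in a snowy forest", num_inference_steps=20).images[0]
image.save("cottage.png")
```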

[–] the_stormcrow@lemmy.ml 2 points 1 year ago

What were your settings for the CPU usage? I have an older laptop that it would be fun to get it running on.

Just install ComfyUI and start it with the --cpu flag. Ensure you have enough system RAM and a swap partition (preferably on an NVMe/SSD).

[–] Blamemeta@lemm.ee 1 points 1 year ago

How did you get it to use regular RAM?

[–] BitSound@lemmy.world 1 points 1 year ago

I run it locally with an 11 GB 1080 Ti. It's only exposed on the local network, so I still use the main SD website if I'm out and about somewhere.

[–] B0rax@feddit.de 1 points 1 year ago

I have it running on my M2 MacBook Air (16 GB RAM).

[–] JackbyDev@programming.dev 1 points 1 year ago

I run it locally. I have a CPU from 2009. All you need is a good GPU.

[–] teichflamme@lemm.ee 1 points 1 year ago

I am running it locally with a 3060 Ti and it works decently well.