this post was submitted on 23 Jul 2023
11 points (100.0% liked)
Stable Diffusion
1487 readers
Welcome to the Stable Diffusion community, dedicated to the exploration and discussion of the open source deep learning model known as Stable Diffusion.
Introduced in 2022, Stable Diffusion uses a latent diffusion model to generate detailed images based on text descriptions and can also be applied to other tasks such as inpainting, outpainting, and generating image-to-image translations guided by text prompts. The model was developed by the startup Stability AI, in collaboration with a number of academic researchers and non-profit organizations, marking a significant shift from previous proprietary models that were accessible only via cloud services.
founded 1 year ago
You can run llama.cpp on the CPU at reasonable speeds, and since it uses normal system RAM instead of VRAM, you can load much larger models than a GPU could hold.
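A rough back-of-envelope (my own estimate, not llama.cpp's exact memory accounting) shows why system RAM opens up larger models: a quantized model's footprint is roughly parameter count times bytes per weight, plus some overhead for the KV cache and buffers.

```python
# Hedged sketch: approximate RAM footprint of a quantized LLM.
# The overhead figure is an assumption, not a measured value.

def model_ram_gb(n_params_billion: float, bits_per_weight: float,
                 overhead_gb: float = 1.0) -> float:
    """Approximate RAM (GB) needed to load an n-billion-parameter model."""
    weight_bytes = n_params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes / 1e9 + overhead_gb

# A 13B model at 4-bit quantization needs roughly 7.5 GB -
# comfortable in 16 GB of system RAM, tight on a 10 GB GPU.
print(round(model_ram_gb(13, 4), 1))  # 7.5
```

By the same arithmetic a 33B model at 4 bits lands around 17-18 GB, which ordinary desktop RAM can handle but most consumer GPUs cannot.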
As for 10GB in SD, I run out of VRAM fairly often when overdoing it: e.g. 1024x768 with multiple ControlNets and some other stuff is pretty much guaranteed to overflow it, so I have to reduce the resolution when using ControlNet. Dreambooth training didn't work at all for me due to lack of VRAM (there might be workarounds, but at least the defaults weren't usable).
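To illustrate why resolution hits VRAM so hard: SD's UNet works on a latent at 1/8 the image resolution with 4 channels, so activation memory scales with latent area, and each active ControlNet runs its own extra encoder pass on top. A small sketch of the area scaling (illustrative arithmetic, not profiler data):

```python
# Hedged sketch: relative latent size at different SD resolutions.
# The 1/8 downscale and 4 latent channels match SD's architecture;
# actual VRAM use also depends on batch size, precision, and attention impl.

def latent_elements(width: int, height: int, channels: int = 4) -> int:
    """Number of elements in the SD latent for a given image size."""
    return (width // 8) * (height // 8) * channels

base = latent_elements(512, 512)    # typical SD 1.x training resolution
big = latent_elements(1024, 768)
print(big / base)  # 3.0 - triple the activation memory before ControlNets
```

So 1024x768 already costs roughly 3x the activation memory of 512x512, and stacking ControlNets multiplies the pressure further, which matches running out of 10GB in practice.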
10GB is still very much usable with SD, but one has to be aware of the limitations. The new SDXL will also raise the VRAM requirements a good bit.