I have been running 1.4, 1.5, and 2 without issue, but every time I try to run SDXL 1.0 (via Invoke or Auto1111) it will not load the checkpoint.

I have the official Hugging Face versions of the checkpoint, refiner, offset LoRA, and VAE. They are all named the way they need to be and are in the appropriate folders. When I pick the model to load, it tries for about 20 seconds, then dumps a very long error in the Python console and falls back to the last model I had loaded. Oddly, it loads the refiner without issue.
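For reference, this is roughly how I have them laid out (standard Auto1111 folder names; the file names are just the ones Hugging Face ships, so treat them as placeholders):

    models/Stable-diffusion/sd_xl_base_1.0.safetensors
    models/Stable-diffusion/sd_xl_refiner_1.0.safetensors
    models/Lora/sd_xl_offset_example-lora_1.0.safetensors
    models/VAE/sdxl_vae.safetensors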

Is this a case of my 8 GB of VRAM just not being enough? I have already tried the --no-half and full-precision arguments.
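In case it matters, the arguments go in webui-user.bat roughly like this (the first line is what I have been trying; the commented-out line is the low-VRAM combination usually suggested for SDXL, which I have not confirmed helps):

    rem what I have been launching with:
    set COMMANDLINE_ARGS=--no-half --precision full
    rem low-VRAM combination suggested for SDXL (untested here):
    rem set COMMANDLINE_ARGS=--medvram --no-half-vae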

RotaryKeyboard@lemmy.ninja · 1 point · 1 year ago

Those safetensors files are all that I have ever used.

For reference, I'm using a 2080 Ti. That's got about 11 GB of VRAM, I think. I'm not having any freezes whatsoever. I've also tried it on my wife's shiny new 4080; there's definitely a speed difference, but again, no freezes or instability. Generating 1024x1024 images does take forever, so I actually went back to 512x512 and stayed there. I can always upscale something that I like.