this post was submitted on 16 Sep 2024
12 points (92.9% liked)

Stable Diffusion

Discuss matters related to our favourite AI Art generation technology

[–] Even_Adder@lemmy.dbzer0.com 1 points 2 months ago* (last edited 2 months ago) (2 children)

I don't think so. They're going to have to do a lot better than a tutorial to win people back. That said, the fact that the two Flux models are distilled, making them close to impossible to fine-tune, sucks too.

[–] istanbullu@lemmy.ml 1 points 2 months ago (1 children)

Kohya now supports Flux fine-tuning. I have seen nice examples on Civitai.

[–] Even_Adder@lemmy.dbzer0.com 0 points 2 months ago

Those might just be LoRA-merged models, not full fine-tunes. From what I heard, fine-tuning doesn't work because the models are distilled. You'd have to find a way to undistill them to train them.
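
For readers following along: the "distilled" point refers to how the released Flux models were reportedly trained to imitate a guided teacher in a single pass. Below is a toy PyTorch sketch of guidance distillation in general, not the actual Flux training code; the module, sizes, and loop here are invented purely for illustration.

```python
# Toy illustration of guidance distillation (not the actual Flux training code).
import torch
import torch.nn as nn

class ToyDenoiser(nn.Module):
    """Stand-in for a diffusion/flow transformer; `extra` is one scalar of conditioning."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, 128), nn.SiLU(), nn.Linear(128, dim))

    def forward(self, x, extra):
        return self.net(torch.cat([x, extra.expand(x.size(0), 1)], dim=-1))

teacher = ToyDenoiser().eval()   # undistilled teacher: needs a cond + uncond pass per step
student = ToyDenoiser()          # distilled student: guidance gets baked into a single pass
opt = torch.optim.AdamW(student.parameters(), lr=1e-4)

for _ in range(100):
    x = torch.randn(8, 64)                         # toy "noisy latents"
    scale = torch.rand(1) * 4 + 1                  # a sampled guidance (CFG) scale
    with torch.no_grad():
        cond = teacher(x, torch.ones(1))           # conditional prediction (toy stand-in)
        uncond = teacher(x, torch.zeros(1))        # unconditional prediction
        target = uncond + scale * (cond - uncond)  # classifier-free-guidance combination
    pred = student(x, scale)                       # student reproduces it without the second pass
    loss = nn.functional.mse_loss(pred, target)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The student ends up with the guidance behaviour baked into its weights, which is roughly why naively continuing to train it with a standard objective is said to degrade it, and why people talk about needing to "undistill" it first.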

[–] clb92@feddit.dk 1 points 2 months ago* (last edited 2 months ago) (1 children)

People have been training great Flux LoRAs for a while now, haven't they? Is a LoRA not a finetune, or have I misunderstood something?

[–] Even_Adder@lemmy.dbzer0.com 0 points 2 months ago (2 children)

Last I heard, LoRAs cause catastrophic forgetting in the model, and full fine-tuning doesn't really work.

[–] clb92@feddit.dk 2 points 2 months ago

Oh well, in practice I'll just continue to enjoy this (possibly forgetful and not-fully-finetunable) model then, that still gives me amazing results 😊

[–] erenkoylu@lemmy.ml 0 points 1 month ago* (last edited 1 month ago)

Quite the opposite. LoRAs are very effective against catastrophic forgetting, and full fine-tuning is very dangerous (but also much more powerful).
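
For anyone wondering what a LoRA actually changes: here is a minimal PyTorch sketch of a LoRA-wrapped linear layer (toy code, not Kohya's or any trainer's actual implementation; names and sizes are made up). The base weights are frozen and only two small low-rank matrices are trained.

```python
# Toy sketch of a LoRA-wrapped linear layer (not any particular trainer's code).
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base weight plus a trainable low-rank update scaled by alpha / r."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                       # original weights stay untouched
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no change at start
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.lora_A.T @ self.lora_B.T)

layer = LoRALinear(nn.Linear(512, 512))
out = layer(torch.randn(4, 512))                          # base output + low-rank delta
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable params: {trainable} / {total}")         # only A and B are trained
```

Because only A and B are updated and the original weights never move, dropping or merging the adapter gives you the unmodified base model back, which is the sense in which LoRAs are gentler on the base model's existing knowledge than a full fine-tune.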