They also claim that it only takes about 8 seconds to generate various good images.

[–] ubermeisters@lemmy.world 7 points 1 year ago (1 children)

Pretty neat. The training process for textual inversion, which I've enjoyed playing around with, takes a while. I hope Automatic1111 gets support for this method of training if it takes off!
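For context on what textual inversion produces: the training run yields a small learned embedding file rather than a new checkpoint, and that embedding is then loaded on top of an existing Stable Diffusion model. A minimal sketch using the Hugging Face diffusers library might look like this; the model ID, embedding repo, and trigger token below are illustrative placeholders, not details from the article:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a frozen base Stable Diffusion checkpoint (placeholder model ID).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Attach a learned textual inversion embedding; the repo and token are
# hypothetical examples, not taken from the post.
pipe.load_textual_inversion("sd-concepts-library/cat-toy", token="<cat-toy>")

# The learned token can now be used in prompts like any other word.
image = pipe("a photo of <cat-toy> on a beach").images[0]
image.save("out.png")
```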

[–] AngrilyEatingMuffins@kbin.social 2 points 1 year ago (1 children)
[–] ubermeisters@lemmy.world 3 points 1 year ago

Great question, I wondered the same thing. I've got a decent knowledge base where Stable Diffusion (text-to-image, etc.) is concerned and understand the applications of this Nvidia process, but I'm not familiar enough with customization options for LLMs. I haven't really seen references to hypernetwork/LoRA/Midjourney-type applications for LLMs, or anything that really "plugs into" your existing model to augment results, the way Stable Diffusion is geared for customization. In my limited understanding, customization for LLMs requires changing the training data and running a completely new training process for the actual model, rather than attaching something to a reference model the way SD does.
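On the Stable Diffusion side, the "plugs into your existing model" workflow described above looks roughly like this with the diffusers library: a LoRA is a small set of low-rank weight deltas loaded on top of a frozen base checkpoint at inference time. This is only a sketch; the model ID, output directory, and prompt are placeholders, not anything from the thread:

```python
import torch
from diffusers import StableDiffusionPipeline

# Frozen base model (placeholder ID); the checkpoint itself is not retrained.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Plug LoRA weight deltas into the existing UNet/text encoder.
# The directory is a hypothetical output folder from a LoRA training run.
pipe.load_lora_weights(
    "./my_style_lora", weight_name="pytorch_lora_weights.safetensors"
)

image = pipe("a portrait, in the trained style").images[0]
image.save("lora_out.png")
```

Nothing in the base checkpoint is modified; the adapter augments the existing weights at load time, which is the contrast the comment draws with retraining a model from scratch.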