This is a great guide and was really helpful when I decided to experiment to see how this works.
A couple of things that confused me when trying this out that might save the next person some time:
- Where to put models: `*.ckpt` and `*.safetensors` files live in `stable-diffusion-webui/models/Stable-diffusion`. They'll be loaded automatically the next time you start `webui-user.bat`.
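If you'd rather script the file placement than drag and drop, here's a minimal Python sketch. The folder layout matches what I described above; the function name and example filenames are my own invention, not part of the webui:

```python
import shutil
from pathlib import Path

def install_checkpoint(model_file: str, webui_root: str) -> Path:
    """Move a downloaded .ckpt/.safetensors file into the folder the
    webui scans on startup: <webui_root>/models/Stable-diffusion."""
    target_dir = Path(webui_root) / "models" / "Stable-diffusion"
    target_dir.mkdir(parents=True, exist_ok=True)  # create it on a fresh install
    dest = target_dir / Path(model_file).name
    shutil.move(str(model_file), str(dest))
    return dest

# e.g. install_checkpoint("some-model.safetensors", "stable-diffusion-webui")
```

Note that LoRAs go in a different folder (`models/Lora`), which is part of why the layout is confusing at first.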
- How to change models: this took me waaay longer to figure out than I'd like to admit. There's a drop-down at the top left of the webui to select the model after you restart.
- I find the range of models, LoRAs, checkpoints, extensions etc. overwhelming. I'm still not sure exactly what each of these does and which ones I'd need. E.g.: what's a checkpoint for?
- Prompt writing is clearly a fine art and can drive you mad. For both this point and the previous one I found civitai.com/images to be a fantastic resource. Browse through the images for styles you like and most of them will list the resources used and the generation data needed to recreate them. I found this to be a great starting point, particularly for negative prompts.
- Deformity: deformed faces have mostly gone away for me after changing this webui setting: Settings > Face restoration > CodeFormer weight = 0. Just need to figure out hands and phantom limbs now…
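The same face-restoration tweak can also be applied per request if you use the webui's optional API (launch with `--api`). A sketch of the request body; the `code_former_weight` key name is my assumption based on the setting's label, so double-check it against the `/sdapi/v1/options` endpoint on your own install:

```python
# Hypothetical txt2img request body applying face restoration with
# CodeFormer weight 0, mirroring the Settings-page change above.
payload = {
    "prompt": "portrait photo of a person",   # example prompt
    "restore_faces": True,
    "override_settings": {"code_former_weight": 0},  # assumed key name
}
```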
Forgot to add: when experimenting with prompts, use a fixed seed number so you can see how your prompt changes affect the image between each generation.
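If you'd rather script those prompt experiments than click Generate repeatedly, the webui's optional API (start it with `--api` in `COMMANDLINE_ARGS`) takes a `seed` field in the txt2img request. A rough sketch, with the helper names and the step/CFG values being my own placeholder choices:

```python
import json
import urllib.request

def txt2img_payload(prompt: str, seed: int, negative_prompt: str = "") -> dict:
    """Build a txt2img request body with a pinned seed, so only the
    prompt changes between runs of an experiment."""
    return {
        "prompt": prompt,
        "negative_prompt": negative_prompt,
        "seed": seed,        # -1 would mean "pick a random seed each time"
        "steps": 20,         # placeholder values; tune to taste
        "cfg_scale": 7,
    }

def submit(payload: dict, url: str = "http://127.0.0.1:7860/sdapi/v1/txt2img"):
    """POST the payload to a locally running webui started with --api."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)

# Same seed, two prompt variants -- any visual difference is down to the prompt:
# submit(txt2img_payload("a cat", 1234))
# submit(txt2img_payload("a cat, oil painting", 1234))
```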