Very nice. How’d you do it? This doesn’t feel like a normal gen.
Thank you! I’m trying hard to get my own style to show through and not just be a prompt with a standard style.
I have taken three models and linked them together in Stable Diffusion and have been working on a set of styles and prompts to get an output that fits what I’m looking for. I’ve probably put hundreds of hours into it this year already.
I can get more detailed, but does that scratch the itch?
Can you elaborate on how you ‘link’ three models together? Do you import one image into another to act as a starting point or something?
Here is the article that best describes it: https://civitai.com/articles/2370/model-merging-management-how-to-merge-stable-diffusion-models-to-fit-your-style
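If it helps demystify it: under the hood a merge is just a weighted average of two checkpoints’ tensors, and for three models you merge two and then merge the result with the third. Here’s a rough sketch of that core idea in Python; the file names and the 0.5 ratio are placeholders, not my actual recipe:

```python
# Weighted two-model merge, the core idea behind A1111's Checkpoint Merger tab.
# File names and the 0.5 ratio are placeholders, not a real recipe.
from safetensors.torch import load_file, save_file

a = load_file("photo_model.safetensors")
b = load_file("digital_art_model.safetensors")

alpha = 0.5  # interpolation ratio: 0.0 = all model A, 1.0 = all model B
merged = {}
for key, tensor in a.items():
    if key in b and tensor.shape == b[key].shape:
        merged[key] = (1 - alpha) * tensor + alpha * b[key]
    else:
        merged[key] = tensor  # keep A's tensor where the models differ

save_file(merged, "merged_two.safetensors")
# Repeat with the third model to get the final three-way blend.
```

The ratios are where all the experimentation goes; that’s the part that eats the hours.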
I’m interested. If I knew more about your process I might be able to suggest some resources I came across recently, if you’re not already making use of them that is.
My process: I took an image I liked on DA and put it into ChatGPT, asking it to describe the image in detail as a base prompt. I edited that prompt to take it from a nice painting of a garden scene to this. Then I added in some of the style prompts I’ve been working on for this art style and passed it through SD via A1111 a few times (roughly the loop sketched below) until I got pretty close to what was in my head.
I use three models glued together in SD, one for photographs, one for digital art/painting, and one for anime.
I’m planning on making a LoRA for the style I’m trying to get, but I just became a grandfather and have been spending a lot of time with my new granddaughter.
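For the “a few times” part, in case anyone wants to reproduce the loop: A1111 exposes a local API if you launch it with --api, and each pass is an img2img call feeding the previous output back in. A rough sketch, assuming the stock local endpoint; the prompt, file names, and denoising value are placeholders, not my actual settings:

```python
# Iterative img2img refinement against A1111's local API (launch with --api).
# Prompt, file names, and denoising_strength are placeholder values.
import base64
import requests

URL = "http://127.0.0.1:7860/sdapi/v1/img2img"

def one_pass(image_b64: str, prompt: str, denoise: float) -> str:
    payload = {
        "init_images": [image_b64],
        "prompt": prompt,
        "denoising_strength": denoise,  # lower = stays closer to the input
        "steps": 30,
    }
    r = requests.post(URL, json=payload)
    r.raise_for_status()
    return r.json()["images"][0]  # base64-encoded PNG

with open("base.png", "rb") as f:
    img = base64.b64encode(f.read()).decode()

for _ in range(3):  # a few gentle passes
    img = one_pass(img, "lush garden scene, my style tokens here", 0.4)

with open("final.png", "wb") as f:
    f.write(base64.b64decode(img))
```

Keeping denoising_strength low is what lets each pass nudge the image toward the prompt without losing the composition.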
Did you use any embeddings or LoRAs?
I use LoRAs on occasion, but not often, and I didn’t here. I find they don’t tend to get me what I’m looking for, but I’m still learning. And I’m not sure what an embedding is.
It’s sort of like a crystallized collection of tokens that tell the model something precise you want without having to enter the tokens yourself. I explain it more here.
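In A1111 specifically, you just drop the embedding file into the embeddings folder and type its filename in the prompt. If you ever play with it outside the web UI, here’s a rough sketch of the same idea using the diffusers library; the embedding file name and trigger word are placeholders:

```python
# Loading a textual-inversion embedding with diffusers.
# The .pt file and the <my-style> trigger word are placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The embedding registers a new "word" whose learned vector stands in
# for the whole bundle of tokens it was trained to capture.
pipe.load_textual_inversion("my_style_embedding.pt", token="<my-style>")

image = pipe("a garden scene, painted in <my-style>").images[0]
image.save("out.png")
```

The trigger word is what makes it feel “crystallized”: one token in the prompt, a whole trained concept behind it.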
I’ll check it out, thank you!!!