That’s some of the best AI porn I’ve seen. Props.
Nice. Care to share the model and prompt?
Sure. But I have to say, it's not easy to get black and white images with colored elements in them. Small changes to the step count, or just adding one or two terms to the prompt, make it a lot less likely to spit out these images and much more likely to return only fully colored pics. The prompt all of these are based on has, at 30 steps, about a 1 in 50 chance of returning this type of pic, with all the others coming out fully colored (still consistently nice, though). I ran it for about 300 pics, then took the seeds of the good ones and made minor variations to the step count. Not ideal, and it takes a lot of time, but it works. The hit rate could of course be improved by describing fewer colored elements in the prompt, but where's the fun in that?
Anyway, I'll add the seeds and steps for these images specifically at the end too, because if somebody ran the prompt as is, there's a high likelihood they wouldn't come out as b&w pics… (There's also a rough sketch of how to script the seed/step sweep after the list.)
Model: ponyRealism_v21MainVAE
positive prompt: score_9, score_8_up, score_7_up, score_6_up, score_5_up, score_4_up, (((black and white photo))) of a woman with huge breasts leaning against a brick wall, outdoor, erotic, sensual, hard shadows, wet shirt, light ray, high contrast, tanlines, wet skin, ((rain)), full body shot, masturbation, orgasm, red hair, red dildo, <lora:Expressive_H:1> Expressiveh, nsfwEMPXL, zPDXL2 zPDXLrl, dreamlike, red lips, pink nipples, photo, best quality, 8k, extremely detailed, rating_explicit, OverallDetailXL
negative prompt: skirt, trousers, pants, panties, nsfwEMPXL-neg, zPDXL2-neg zPDXLrl-neg, painting, drawing, comic, cg, 3D, neghands-neg DeepNegative_xl_v1, text, logo, writing, watermark,
CFG scale: 7, Sampler: Euler a, Size: 1024x1024, Clip skip: 2
- pic 1: seed: 2860115259, steps: 35
- pic 2: seed: 2860115278, steps: 30
- pic 3: seed: 3964289261, steps: 40
- pic 4: seed: 2024873051, steps: 35
- pic 5: seed: 3964289261, steps: 37
- pic 6: seed: 1481916937, steps: 30
- pic 7: seed: 2860115256, steps: 35
- pic 8: seed: 3964289261, steps: 50
- pic 9: seed: 103299121, steps: 50
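If you want to reproduce or sweep these without clicking through the UI, A1111 also exposes an HTTP API when you start it with --api. Here's a rough sketch of the step-count sweep I described, keeping one seed fixed. The endpoint and field names are from the txt2img API; passing clip skip through override_settings is how it's done as far as I know, but double-check against your version. Paste the actual prompts from above where the placeholders are.

```bash
# Hypothetical sweep over step counts for one seed, against a local
# Automatic1111 instance started with:  ./webui.sh --api
for steps in 30 35 40 50; do
  curl -s http://127.0.0.1:7860/sdapi/v1/txt2img \
    -H 'Content-Type: application/json' \
    -d "{
          \"prompt\": \"...positive prompt from above...\",
          \"negative_prompt\": \"...negative prompt from above...\",
          \"seed\": 3964289261,
          \"steps\": $steps,
          \"cfg_scale\": 7,
          \"sampler_name\": \"Euler a\",
          \"width\": 1024,
          \"height\": 1024,
          \"override_settings\": {\"CLIP_stop_at_last_layers\": 2}
        }" \
  | jq -r '.images[0]' | base64 -d > "seed3964289261_steps${steps}.png"
done
```

The response carries the image as a base64 string in .images[0], hence the jq/base64 pipe at the end. From there it's easy to loop over seeds too and just eyeball which ones come out b&w.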
> ponyRealism_v21MainVAE

Awesome. Hate to bother, but could you point me in the direction of a decent resource for getting started with local text-to-image generation? Most of the stuff I've found assumes some baseline knowledge or an already-installed framework. I run a full-time Linux desktop and self-host a bunch of containerized stuff, but this is a whole new ball of wax.
If you're not using Windows with an Nvidia GPU, be prepared to put in some frustrating extra work. I know, since I'm running mine on Fedora Linux with a 7900XT, and it took me three attempts and several hours of googling just to get it running properly.
But basically, you’ll want to start out by using the Automatic1111 WebUI. Yes, it’s called WebUI, but it runs completely locally.
Just make sure you're using the exact Python version required (3.10.6). Make sure all the dependencies are properly installed. Make sure your drivers are up to date. Make sure you're running webui.sh from the same folder that also contains webui-user.sh. And you'll probably still run into issues.
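For the Linux/AMD case specifically, this is roughly what the setup looked like for me, assuming the standard AUTOMATIC1111 repo. The ROCm override at the end is a workaround I needed for my RDNA3 card; it may not apply to your hardware at all:

```bash
# Grab the WebUI and run it from the repo root, next to webui-user.sh.
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
cd stable-diffusion-webui

# webui.sh wants Python 3.10.x; check what you have if your distro
# defaults to something newer:
python3.10 --version   # should print 3.10.x

# Point webui.sh at the right interpreter (normally set in webui-user.sh):
export python_cmd=python3.10

# Assumption: RDNA3 card (7900XT is gfx1100). This override is a common
# workaround when ROCm doesn't detect the card properly.
export HSA_OVERRIDE_GFX_VERSION=11.0.0

./webui.sh
```

If the launch script starts pulling in the wrong PyTorch build or bails on the GPU, that's usually where the several hours of googling begin; the override above and the pinned Python version fixed it in my case.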
The WebUI is probably not the best thing for Linux folks out there (though it's gotten a lot better compared to a year ago), but it's probably the most popular way for us common folks to interact with Stable Diffusion. So there's a ton of material out there on how to use it, including Linux folks asking questions and getting answers that aren't just "use Windows". So when you inevitably run into trouble, you'll at least get some hints on how to fix it.
Good luck