Bing
prompt:
Food product. Ready-to-eat meal that's cheap and saves time on preparation. Dried food. Feature children in the ad. Dinner table, family. The ad should be black and white. Visual of the family in the ad should be vintage (1980s) era. Fallout-style ads.
I guess that brute-forcing can work.
For images with multiple passages of text, like this one, you can maybe combine that with inpainting on image generators that provide it (so that once you get one piece of text the way you want it, you can leave it alone and go generate the others).
There's a technique I saw someone use, not to solve this problem but to remove text; I was commenting on it a few days ago. Basically, there's good OCR software out there, and it's capable of detecting text of various sorts. So detextify just keeps running OCR on a generated image: it detects text, gets the bounding box from the OCR software, and re-runs an inpaint on that bounding box, repeating until the OCR software can't detect any text. It's not incredibly compute-efficient, but it is cheap in terms of human time.
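A rough sketch of that loop, in case it's useful. This assumes pytesseract for the OCR side (whatever detextify actually uses may differ) and a hypothetical `inpaint(image, mask)` function standing in for your generator's inpainting endpoint; neither choice comes from the original description.

```python
import pytesseract
from PIL import Image, ImageDraw

def inpaint(image: Image.Image, mask: Image.Image) -> Image.Image:
    # Placeholder: call whatever inpainting endpoint your image generator
    # exposes, regenerating only the white region of `mask`. Not a real API.
    raise NotImplementedError

def strip_text(image: Image.Image, max_rounds: int = 10) -> Image.Image:
    """Repeatedly OCR the image and inpaint over detected text boxes
    until an OCR pass comes back empty (or we give up)."""
    for _ in range(max_rounds):
        data = pytesseract.image_to_data(image, output_type=pytesseract.Output.DICT)
        boxes = [
            (data["left"][i], data["top"][i],
             data["left"][i] + data["width"][i],
             data["top"][i] + data["height"][i])
            for i, word in enumerate(data["text"])
            if word.strip() and float(data["conf"][i]) > 50
        ]
        if not boxes:
            return image  # OCR no longer finds any text; done
        # Build a mask covering every detected text box, then inpaint it away.
        mask = Image.new("L", image.size, 0)
        draw = ImageDraw.Draw(mask)
        for box in boxes:
            draw.rectangle(box, fill=255)
        image = inpaint(image, mask)
    return image
```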
I suppose that as long as the OCR software can handle actually reading the text, it might be possible to use a similar technique, but instead of repeating until the OCR software is unable to find text, repeating until it finds text that matches the desired string.
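Under the same assumptions (pytesseract, and reusing the placeholder `inpaint` stub from the sketch above), that variant might look something like this: keep regenerating one text region and only accept the image once OCR reads back the string you wanted.

```python
import re
import pytesseract
from PIL import Image, ImageDraw

def normalize(s: str) -> str:
    # Collapse whitespace and case so minor OCR quirks don't count as mismatches.
    return re.sub(r"\s+", " ", s).strip().lower()

def regenerate_until_match(image: Image.Image, target: str, box: tuple,
                           max_rounds: int = 25) -> Image.Image:
    """Re-inpaint the region `box` (left, top, right, bottom) until OCR on
    that region reads back `target`, then leave it alone."""
    for _ in range(max_rounds):
        recovered = pytesseract.image_to_string(image.crop(box))
        if normalize(recovered) == normalize(target):
            return image  # OCR agrees with the desired string; keep this render
        mask = Image.new("L", image.size, 0)
        ImageDraw.Draw(mask).rectangle(box, fill=255)
        image = inpaint(image, mask)  # same placeholder inpaint as above
    return image  # give up after max_rounds and return the last attempt
```

For an image with several passages of text, you'd run this once per text block, which is basically the "lock in one piece and regenerate the others" idea from earlier.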