

So, I actually like generative AI (disclaimer I feel I have to include every time: local open models only), and my main problem with that image is how genericized the new face is. If you’ve seen a lot of AI images, it’s immediately recognizable as the default mixed Asian/Caucasian face you get when prompting nothing more specific than “woman” — a product of the datasets that dominate the training data. It heavily implies all faces will be similarly genericized.
I don’t think this tech will be viable unless creators can give the AI a reference image of what a character should look like when photorealistic, and that’s only going to increase the workload of running this in real time.









Mostly agreed. For me the biggest problem here is Nvidia presenting this as the assumed default experience everyone obviously wants, and holding up a heavily genericized face as a win. The tech needs to be much more energy efficient and configurable, on both the developer and end-user side, before I’ll give it any serious attention.
Regarding future versions of this tech, I think “death of the author” still applies to video games, so changing artistic intent isn’t always bad, especially for games that get frequently replayed. I certainly don’t play stock Skyrim or Minecraft anymore. To use your example, yes, an attempted photorealistic Ocarina of Time would probably be too off-putting, but give me style options like BotW, Spider-Verse, Pixar, anime, etc.? I’d be down to try those.