Did they claim the video was unedited Sora output? It doesn’t sound like they had to do all that much to the output to get what they wanted. There aren’t any AI tools right now that always output exactly what you want without any alterations, so of course they had to regenerate clips many times and fix them up manually. They still ended up with a video that required no actual filming, and that’s impressive.
Has generative video’s problem with faces and hands been solved? Not quite. We still get glimpses of warped body parts. And text is still a problem (in another video, by the creative agency Native Foreign, we see a bike repair shop with the sign “Biycle Repaich”). But everything in “Air Head” is raw output from Sora. After editing together many different clips produced with the tool, Shy Kids did a bunch of post-processing to make the film look even better. They used visual effects tools to fix certain shots of the main character’s balloon face, for example.
fucking called it
putting the pro in prognostication
This is the article that I read at launch:
I think what they claimed was “this is what real artists can do with this technology.”
Which appears to be exactly what this video is.
weird that OpenAI neglected to mention that what the real artists were doing with the technology was spending a lot of time heavily editing and fucking rotoscoping its output to look barely passable
but the result was still uninteresting garbage that’s only barely notable if you think generative AI did it, and we’ve established that all the coherent parts of this were done (as usual) with the hard work of a team of uncredited humans
I am confused, was the expectation really a magic automate entire movie clip button? Because that’s not how any kind of creative generative AI works in my experience.
llms are not sentient, they cannot perform “intentional reasoning”, so of course the showcased art is a human work. Of course the raw output has hallucinations, gpt4 is not exempt from that either, but it’s still a great drafter.
The result still stands as technologically very impressive. This kind of thing was perceived as something that would never be possible, and it is improving quickly.
No cameras, no physical shooting, no actors. Just a few creatives and something to compute.
oh come the fuck off it, OpenAI’s marketing presents sora as exactly a magic automate entire movie clip button. here’s OpenAI marketing the stupid thing as a world simulator which is fucking laughable if it can’t maintain even basic consistency. here’s an analysis of how disappointing sora actually is
tonight’s promptfans are fucking boring and I’m cranky from openai’s shitty sora page crashing my browser so I guess all you folks doing free marketing for Sam Altman can fuck off now
also:
like @gnomicutterance@hachyderm.io I am begging generative AI idiots to realize how out of touch “no cameras, no physical shooting, no actors” is as a supposed milestone when it applies equally well to Xavier: Renegade Angel… except Xavier looked fucked up on purpose
Still proud of promptfans
* promptfondlers
Tbh I find both acceptable, and not solely because I thought of the one. Current working mental taxonomy:
Fans: the internet weird-nerds choosing to be bodyshields for this shit absent any other reason whatsoever
Fondlers: those that write the thonkpieces as demonstrated elsethread (the infosec panic one)
it’s quite telling that you don’t think that actors are “creatives” but think that “gpt-4 is a great drafter”.
well, that’s true from a certain point of view
marketing department said it so it must be right
yes and people 4 months ago were like “haha stupid ai art can’t draw hands” and now that’s just like not a valid argument because the tech has matured to a degree that it’s pretty reasonable to create something with little to no imperfections, and obviously that will happen again
Bold to post this in a thread about how it had many many imperfections and what it outputs has to be manually reworked by humans, still.
🎶 Iiiiiiiiii-eye-eye … have become … confidently wrong 🎶
(with only the tiniest apology for the massacre of Waters’ lyrics this happens to be)
lying is immoral
they still can’t draw hands