OpenAI’s Sora is launching today — here are highlights from the first review
Sora, OpenAI’s video generator, is launching today — at least for some users.
YouTuber Marques Brownlee revealed the news in a video published to his channel Monday morning. Brownlee, who got early access to Sora, gave his initial impressions in a 15-minute review.
Sora lives on Sora.com, Brownlee said; the homepage shows a scrolling feed of recently generated, OpenAI-curated Sora videos. (It hadn’t gone live for us here at TechCrunch as of publication time.) Notably, the tool isn’t built into ChatGPT, OpenAI’s AI-powered chatbot platform. For now, Sora seems to be its own separate product.
Videos on the Sora homepage can be bookmarked to a “Saved” tab for later viewing, organized into folders, and clicked on to see which text prompts were used to create them. Sora can generate videos from uploaded images as well as text prompts, according to Brownlee, and can edit existing Sora-originated videos.
Using the “Re-mix” feature, users can describe changes they want to see in a video and Sora will attempt to incorporate these in a newly generated clip. Re-mix has a “strength” setting that lets users specify how drastically they want Sora to change the target video, with higher values yielding videos that take more artistic liberties.
Sora can generate up to 1080p footage, Brownlee said — but the higher the resolution, the longer videos take to generate. 1080p footage takes 8x longer than 480p, the fastest option, while 720p takes 4x longer.
Brownlee said that the average 1080p video took a “couple of minutes” to generate in his testing. “That’s also, like, right now, when almost no one else is using it,” he said. “I kind of wonder how much longer it’ll take when this is just open for anyone to use.”
In addition to generating one-off clips, Sora has a “Storyboard” feature that lets users string together prompts to create scenes or sequences of videos, Brownlee said. This is meant to help with consistency, presumably — a notorious weak point for AI video generators.
But how does Sora perform? Well, Brownlee said, it suffers from the same flaws as other generative video tools out there, namely issues with object permanence. In Sora videos, objects pass in front of or behind each other in ways that don’t make sense, and disappear and reappear for no apparent reason.
Legs are another major source of problems for Sora, Brownlee said. Any time a person or animal has to walk for a long stretch in a clip, Sora will confuse the front legs and back legs, which “swap” back and forth in an anatomically impossible way, he said.
Sora has a number of safeguards built in, Brownlee said, and prohibits creators from generating footage that shows people under the age of 18, contains violence or “explicit themes,” or might infringe on a third party’s copyright. Sora also won’t generate videos from images that include public figures, recognizable characters, or logos, Brownlee said, and it watermarks each video — albeit with a visual watermark that can be easily cropped out.
So, what’s Sora good for? Brownlee found it useful for things like title slides in a certain style, animations, abstract visuals, and stop-motion footage. But he stopped short of endorsing it for anything photorealistic.
“It’s impressive that it’s AI-generated video, but you can tell pretty quickly that it’s AI-generated video,” he said of the majority of Sora’s clips. “Things just get really wonky.”