Founded in 2018, Runway has been developing AI-powered video-editing software for several years. Its tools are used by TikTokers and YouTubers as well as major film and TV studios. The creators of The Late Show with Stephen Colbert used Runway software to edit the show's graphics; the visual effects team behind the hit movie Everything Everywhere All at Once used the company's technology to create certain scenes.
In 2021, Runway teamed up with researchers at the University of Munich to build the first version of Stable Diffusion. Stability AI, a UK-based startup, then stepped in to cover the computational costs needed to train the model on large amounts of data. In 2022, Stability AI took Stable Diffusion mainstream, turning it from a research project into a global phenomenon.
But the two companies no longer collaborate. Getty is now taking legal action against Stability AI, alleging that the company used Getty images found in Stable Diffusion's training data without permission, and Runway wants to keep its distance.
Gen-1 represents a new beginning for Runway. It follows the text-to-video models that came out late last year, including Make-A-Video from Meta and Phenaki from Google, both of which can generate very short video clips from scratch. It is also similar to Dreamix, announced last week by Google, which can create new videos from existing ones by applying specified styles. But at least judging from Runway's demo reel, Gen-1 appears comparable in video quality, and because it transforms existing footage, it can produce much longer videos than most previous models. (The company says it will post technical details about Gen-1 on its website in the next few days.)
Unlike Meta and Google, Runway built its model with customers in mind. "This is one of the first models developed in close proximity to the video-maker community," says Valenzuela. "It comes with years of understanding of how filmmakers and VFX editors work in post-production."
Gen-1, which runs in the cloud via Runway's website, is being rolled out today to a handful of invited users and will be made available to everyone on the waiting list in a few weeks.
Last year’s explosion in generative AI was fueled by millions of people getting their hands on powerful creative tools for the first time and sharing what they made. Valenzuela hopes that putting Gen-1 in the hands of creative professionals will soon have the same impact on video.
"We're very close to making full-length films," he says. "We're close to the place where most of the content you see online originates."