Google’s new VideoPoet AI video generation model looks incredible


Just yesterday, I asked if Google would ever get an AI product release right on the first try. Consider that asked and answered, at least going by the looks of its latest research.

This week, Google showed off VideoPoet, a new large language model (LLM) from a team of 31 researchers at Google Research, designed for a variety of video generation tasks.

The fact that the Google Research team built an LLM for these tasks is notable in and of itself. As they write in their pre-review research paper: “Most existing models employ diffusion-based methods that are often considered the current top performers in video generation. These video models typically start with a pretrained image model, such as Stable Diffusion, that produces high-fidelity images for individual frames, and then fine-tune the model to improve temporal consistency across video frames.”

By contrast, instead of using a diffusion model based on the popular (and controversial) open source Stable Diffusion image/video-generating AI, the Google Research team used an LLM, a different type of AI model built on the transformer architecture. Transformers are typically used for text and code generation, as in ChatGPT, Claude 2, or Llama 2, but the Google Research team trained VideoPoet to generate videos instead.


Pre-training was key

They did this by heavily “pre-training” the VideoPoet LLM on 270 million videos and more than 1 billion text-and-image pairs from “the public internet and other sources,” and specifically, turning that data into text embeddings, visual tokens, and audio tokens, on which the AI model was “conditioned.”
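To make the idea concrete, here is a minimal, purely illustrative sketch of the general approach: quantize visual data into discrete tokens via a codebook and concatenate them with text tokens into the single sequence an autoregressive model would be trained on. The function names, the toy codebook, and the special-token values are all hypothetical; this is not Google's actual tokenizer, which operates on real video and audio at scale.

```python
# Hypothetical sketch of video-as-tokens for an LLM (not Google's real pipeline).

def quantize_patch(patch, codebook):
    """Map a patch (a vector of floats) to the index of its nearest codebook entry."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(codebook)), key=lambda i: dist(patch, codebook[i]))

def tokenize_video(frames, codebook):
    """Turn each frame's patches into discrete visual tokens, frame by frame."""
    return [quantize_patch(p, codebook) for frame in frames for p in frame]

def build_sequence(text_tokens, visual_tokens, bos=0, bov=1):
    """Concatenate modalities into the single stream an autoregressive model sees."""
    return [bos] + text_tokens + [bov] + visual_tokens

# Toy example: 2 frames, 2 two-dimensional patches each, 3-entry codebook.
codebook = [(0.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
frames = [
    [(0.1, 0.1), (0.9, 1.0)],   # frame 1
    [(0.0, 0.9), (1.1, 0.9)],   # frame 2
]
visual = tokenize_video(frames, codebook)
seq = build_sequence(text_tokens=[7, 8], visual_tokens=visual)
print(visual)  # [0, 1, 2, 1]
print(seq)     # [0, 7, 8, 1, 0, 1, 2, 1]
```

Once everything is one token stream, next-token prediction over that stream is what lets a single transformer handle text conditioning and video generation together.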

The results are pretty jaw-dropping, even in comparison to some of the state-of-the-art consumer-facing video generation models such as Runway and Pika, the former a Google investment.

Longer, higher quality clips with more consistent motion

Beyond that, the Google Research team notes that its LLM-based approach may allow for longer, higher-quality clips, eliminating some of the constraints of current diffusion-based video-generating AIs, in which the movement of subjects tends to break down or turn glitchy after just a few frames.

“One of the current bottlenecks in video generation is in the ability to produce coherent large motions,” two of the team members, Dan Kondratyuk and David Ross, wrote in a Google Research blog post announcing the work. “In many cases, even the current leading models either generate small motion or, when producing larger motions, exhibit noticeable artifacts.”

Animated GIF showing how Google Research’s VideoPoet AI can animate still images. Credit: Google Research

But VideoPoet can generate larger and more consistent motion across longer videos of 16 frames, based on the examples posted by the researchers online. It also allows for a wider range of capabilities right from the jump, including simulating different camera motions, different visual and aesthetic styles, even generating new audio to match a given video clip. It also handles a range of inputs including text, images, and videos to serve as prompts.

By integrating all these video generation capabilities within a single LLM, VideoPoet eliminates the need for multiple specialized components, offering a seamless, all-in-one solution for video creation.

In fact, viewers surveyed by the Google Research team preferred it. The researchers showed an unspecified number of “human raters” clips generated by VideoPoet alongside clips generated by the video generation diffusion models Source-1, VideoCrafter, and Phenaki, presenting two clips at a time side by side. The human evaluators largely rated the VideoPoet clips as superior.

As summarized in the Google Research blog post: “On average people selected 24–35% of examples from VideoPoet as following prompts better than a competing model vs. 8–11% for competing models. Raters also preferred 41–54% of examples from VideoPoet for more interesting motion than 11–21% for other models.” You can see the results displayed in a bar chart format below as well.

Built for vertical video

Google Research has tailored VideoPoet to produce videos in portrait orientation, or “vertical video,” by default, catering to the mobile video marketplace popularized by Snap and TikTok.

Example of a vertical video created by Google Research’s VideoPoet video generation LLM. Credit: Google Research

Looking ahead, Google Research envisions expanding VideoPoet’s capabilities to support “any-to-any” generation tasks, such as text-to-audio and audio-to-video, further pushing the boundaries of what’s possible in video and audio generation.

There’s only one problem I see with VideoPoet right now: it’s not currently available for public use. We’ve reached out to Google for more information on when it might become available and will update when we hear back. But until then, we’ll have to wait eagerly for its arrival to see how it really compares to other tools on the market.

VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings.


Source: https://venturebeat.com/ai/googles-new-videopoet-multimodal-ai-video-generation-model-looks-incredible/