New “Stable Video Diffusion” AI model can animate any still image

[Image: Still examples of images animated using Stable Video Diffusion by Stability AI. (credit: Stability AI)]

On Tuesday, Stability AI released Stable Video Diffusion, a new free AI research tool that can turn any still image into a short video—with mixed results. It’s an open-weights preview of two AI models that use a technique called image-to-video, and it can run locally on a machine with an Nvidia GPU.

Last year, Stability AI made waves with the release of Stable Diffusion, an "open weights" image synthesis model that kick-started a wave of open image synthesis and inspired a large community of hobbyists who have built on the technology with their own custom fine-tunings. Now Stability wants to do the same with AI video synthesis, although the tech is still in its infancy.

Right now, Stable Video Diffusion consists of two models: one that can produce image-to-video synthesis at 14 frames of length (called “SVD”), and another that generates 25 frames (called “SVD-XT”). They can operate at varying speeds from 3 to 30 frames per second, and they output short (typically 2-4 second-long) MP4 video clips at 576×1024 resolution.
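The stated clip lengths follow directly from the frame counts and playback rates. A quick sketch of the arithmetic (the 7 fps figure below is just a mid-range illustration, not a default stated by Stability AI):

```python
def clip_duration(frames: int, fps: int) -> float:
    """Duration in seconds of a clip with `frames` frames played at `fps`."""
    return frames / fps

# SVD outputs 14 frames, SVD-XT outputs 25; at an example rate of 7 fps:
print(clip_duration(14, 7))  # SVD:    2.0 seconds
print(clip_duration(25, 7))  # SVD-XT: ~3.57 seconds
```

At the supported 3-30 fps range, the same 14- or 25-frame output can stretch anywhere from well under a second to over eight, which is why typical clips land in the 2-4 second range.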
