Stability AI Announces Its First Generative AI Video Tool

Stable Video Diffusion is accessible in a research preview phase.

Photo by Onur Binay / Unsplash

Stability AI has launched its first generative video model, Stable Video Diffusion, built on the framework of its Stable Diffusion image model. Currently accessible in a research preview phase, the new AI video tool marks a step toward versatile video models suited to a broad range of users.

Details of the release, including the model code, have been published on Stability AI's GitHub repository, with the weights needed for local deployment available on their Hugging Face page. For a deeper look at the technical details, see the accompanying research paper.
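As a rough illustration of what running the weights locally can look like, here is a minimal sketch using Hugging Face's diffusers library, assuming it exposes the model through a StableVideoDiffusionPipeline and that the img2vid-xt checkpoint ID below matches the published repository; the input image path is a placeholder.

```python
import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import load_image, export_to_video

# Load the published weights (fp16 variant to fit on a single consumer GPU).
# Model ID is assumed to match the Hugging Face release.
pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt",
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe.to("cuda")

# Condition generation on a single still image (hypothetical local file).
image = load_image("input_frame.png").resize((1024, 576))

# Generate a short clip; decode_chunk_size trades speed for VRAM.
frames = pipe(image, decode_chunk_size=8).frames[0]
export_to_video(frames, "generated.mp4", fps=7)
```

This is only a sketch of image-to-video inference under those assumptions, not an official quickstart; consult the GitHub repository and Hugging Face page for the supported setup.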

The video model is designed for flexibility, handling tasks such as multi-view synthesis from a single image, and is intended to serve as a foundation for a range of additional models, much like the ecosystem that has grown up around Stable Diffusion.

For this phase of the rollout, Stability AI stresses that the model is available exclusively for research purposes; the company is gathering community feedback on safety and quality to refine the model ahead of a wider release.
