Overview
Vidu 2 is a next-generation video generation model that balances speed and quality. It focuses on image-to-video generation and keyframe-based video creation, supporting 720P resolution for videos up to 4 seconds long. Generation is significantly faster and cheaper than before, and color-distortion issues in image-to-video outputs have been addressed, delivering stable, controllable visuals well suited to e-commerce scenarios. Enhanced semantic understanding between keyframes and improved consistency across multiple reference images make Vidu 2 an efficient tool for mass production in general entertainment, internet content, anime short series, and advertising.

Available models:
- vidu2-image
- vidu2-start-end
- vidu2-reference
Price
$0.2 / video
Capability
Image-to-Video Generation
Duration
4s
Clarity
720P
Capability Description
Image-to-Video Generation
Generate a video by providing a starting frame or both starting and ending frames along with corresponding text descriptions.
Start and End Frame
Supports input of two images: the first uploaded image is treated as the starting frame, and the second as the ending frame. The model uses these images as input parameters to generate the video.
Reference-based Video Generation
Generate a video from reference images and a text prompt; currently supports both a general style and an anime style optimized for animation.
The URL for a generated video is valid for one day; download and save the file promptly if you need to keep it.
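Because the result URL expires after one day, it is worth persisting the file immediately. A minimal standard-library sketch (the URL would come from the generation response; the function name is illustrative):

```python
import urllib.request
from pathlib import Path

def save_video(video_url: str, dest: str) -> Path:
    """Download a generated video to a local path before its
    one-day URL expires. Returns the destination path."""
    path = Path(dest)
    with urllib.request.urlopen(video_url) as resp:
        path.write_bytes(resp.read())
    return path
```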
Usage
General Entertainment Content Generation
- Input a single frame or IP elements to quickly generate short videos with coherent storylines and interactive special effects
- Supports diverse visual styles from anime-inspired to realistic
- Tailored for mass production of UGC creative content on short video platforms
Anime Short Drama Production
- Input static character images or keyframes to generate smooth animated sequences and micro-dramas
- Accurately reproduce detailed character movements (e.g., facial expressions)
- Supports mass production in various styles such as Chinese and Japanese anime
- Designed to meet animation studios’ needs for IP-based content expansion
Advertising & E-commerce Marketing
- Input real product images to intelligently generate dynamic advertising videos
- Clearly showcase product features such as 3C details and beauty product textures
- Automatically adapt to various platform formats, such as vertical videos for TikTok and horizontal layouts for social feeds
Resources
API Documentation: Learn how to call the API.

Introducing Vidu 2
1. Efficient Video Generation Speed
With optimized model computing architecture, video rendering efficiency is significantly enhanced. This allows daily content teams to respond quickly to trending topics, and enables e-commerce sellers to mass-produce product display videos on demand—greatly reducing content delivery time and helping creators seize traffic windows.
2. Cost-Effective 720P Output
The cost of generating 720P video has dropped to 40% of that of the Q1 version. Small and medium-sized brands can now batch-produce videos for multiple SKUs, while advertising teams can test creative concepts like “product close-ups + scenario storytelling” at a lower cost, meeting full-platform marketing needs without breaking the content budget.
3. Stable and Controllable Image-to-Video Generation
- The model addresses the “texture color shift” issue—accurately restoring details like the silky glow of satin or the matte finish of leather in clothing videos. In e-commerce scenarios, product colors are displayed more realistically.
- Dynamic frame compensation is optimized, ensuring smooth, shake-free motion for rotating 3C products or hand demonstrations in beauty tutorials.
- Multiple visual styles are supported, enabling eye-catching content like “product close-up + stylized camera movement,” ideal for e-commerce main images and short-form promotional videos.
4. Semantically Enhanced Keyframe Transitions
The model strikes a balance between creativity and stability, delivering significantly improved performance and semantic understanding, making it an optimal solution for keyframe-based video generation. By accurately analyzing scene logic and action continuity, transitions between frames are smooth and natural, enhancing narrative coherence throughout the content.
5. Enhanced Consistency Across Multiple Reference Images
When multi-element materials are provided as input, the visual style of the generated video (such as tone and lighting) remains highly unified. For example, in a cultural tourism promotional video, transitions between scenes such as sunrise over an ancient city, street markets, and folk performances stay consistent with the “Chinese style” filter. In anime IP derivative content, characters' actions and expressions across different plot scenes also strictly adhere to the original settings, facilitating coherent creation of multi-scene, multi-element content.

Quick Start
1. Image-to-Video Generation
- Curl
- Python
- Java
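As a minimal Python sketch of an image-to-video call, assuming a placeholder endpoint and illustrative field names (`API_URL`, `image_url`, `duration`, and `resolution` are assumptions, not confirmed schema; consult the API Documentation for the exact request format):

```python
import json
import urllib.request

# Placeholder endpoint and key -- substitute your provider's actual values.
API_URL = "https://api.example.com/v1/video/generation"
API_KEY = "YOUR_API_KEY"

def build_image_to_video_request(image_url: str, prompt: str) -> dict:
    """Assemble a request body for the vidu2-image model.
    Field names are assumptions; check the API reference."""
    return {
        "model": "vidu2-image",
        "image_url": image_url,   # the starting frame
        "prompt": prompt,
        "duration": 4,            # Vidu 2 generates clips up to 4 seconds
        "resolution": "720p",
    }

def submit(payload: dict) -> dict:
    """POST the payload and return the parsed JSON response."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

A typical flow would be `submit(build_image_to_video_request(url, prompt))`, then polling or reading the returned video URL from the response.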
2. Start and End Frame
- Curl
- Python
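A sketch of the start-and-end-frame request body, under the same assumptions as above (field names like `images` are illustrative). As the capability description states, the first image is treated as the starting frame and the second as the ending frame:

```python
def build_start_end_request(start_frame_url: str, end_frame_url: str,
                            prompt: str) -> dict:
    """Assemble a request body for the vidu2-start-end model.
    Field names are assumptions; check the API reference."""
    return {
        "model": "vidu2-start-end",
        # Order matters: [starting frame, ending frame]
        "images": [start_frame_url, end_frame_url],
        "prompt": prompt,
        "duration": 4,
        "resolution": "720p",
    }
```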
3. Reference-based Video Generation
- Curl
- Python
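Finally, a sketch of a reference-based request, again with illustrative field names (`reference_images`, `style` are assumptions). The `style` values mirror the two styles named in the capability description, general and anime:

```python
def build_reference_request(reference_urls: list[str], prompt: str,
                            style: str = "general") -> dict:
    """Assemble a request body for the vidu2-reference model.
    `style` may be "general" or "anime" per the capability description;
    field names are assumptions, so check the API reference."""
    if style not in ("general", "anime"):
        raise ValueError("style must be 'general' or 'anime'")
    return {
        "model": "vidu2-reference",
        "reference_images": reference_urls,
        "prompt": prompt,
        "style": style,
        "duration": 4,
        "resolution": "720p",
    }
```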