Overview

Vidu 2 is a next-generation video generation model that strikes a balance between speed and quality. It focuses on image-to-video generation and keyframe-based video creation, supporting 720P resolution for videos up to 4 seconds long. With significantly faster generation speed and reduced cost, it addresses color distortion issues in image-to-video outputs, delivering stable and controllable visuals ideal for e-commerce scenarios. Enhanced semantic understanding between keyframes and improved consistency with multiple reference images make Vidu 2 a highly efficient tool for mass production in pan-entertainment, internet content, anime short series, and advertising.

Price

$0.2 / video

Capability

Image-to-Video Generation

Duration

4s

Clarity

720P

Capability Description

Image-to-Video Generation

Generate a video by providing a starting frame or both starting and ending frames along with corresponding text descriptions.

Start and End Frame

Supports input of two images: the first uploaded image is treated as the starting frame, and the second as the ending frame. The model uses both images as input parameters to generate the video.

Reference-based Video Generation

Generate a video from reference images together with a text prompt; currently supports both a general style and an anime style optimized for animation.
The URL for a generated video is valid for one day. Please save the file promptly if you need to keep it.
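Because generated video URLs expire after one day, the simplest way to keep a result is to download it immediately. A minimal sketch: the URL below is a placeholder for the link returned in the API response, and the download command is left commented out so the snippet is safe to copy as-is.

```shell
# Placeholder for the URL returned in the generation result.
VIDEO_URL="https://example.com/path/to/generated/video.mp4"
# Derive a local filename from the URL path.
OUT_FILE=$(basename "$VIDEO_URL")
# Uncomment to perform the actual download:
# curl --location --output "$OUT_FILE" "$VIDEO_URL"
echo "$OUT_FILE"
```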

Usage

Resources

API Documentation: Learn how to call the API.

Introducing Vidu 2

1. Efficient Video Generation Speed

With optimized model computing architecture, video rendering efficiency is significantly enhanced. This allows daily content teams to respond quickly to trending topics, and enables e-commerce sellers to mass-produce product display videos on demand—greatly reducing content delivery time and helping creators seize traffic windows.
2. Cost-Effective 720P Output

The cost of generating 720P resolution videos has dropped to 40% of the Q1 version. Small and medium-sized brands can now create batch videos for multiple SKUs, while advertising teams can test creative concepts like “product close-ups + scenario storytelling” at a lower cost—meeting full-platform marketing needs without breaking the content budget.
3. Stable and Controllable Image-to-Video Generation

  • The model addresses the “texture color shift” issue—accurately restoring details like the silky glow of satin or the matte finish of leather in clothing videos. In e-commerce scenarios, product colors are displayed more realistically.
  • Dynamic frame compensation is optimized, ensuring smooth, shake-free motion for rotating 3C products or hand demonstrations in beauty tutorials.
  • Multiple visual styles are supported, enabling eye-catching content like “product close-up + stylized camera movement,” ideal for e-commerce main images and short-form promotional videos.
4. Semantically Enhanced Keyframe Transitions

The model strikes a balance between creativity and stability, delivering significantly improved performance and semantic understanding, making it a strong solution for keyframe-based video generation. By accurately analyzing scene logic and action continuity, transitions between frames are smooth and natural, enhancing narrative coherence throughout the content.
5. Enhanced Consistency of Multiple Reference Images

When multi-element materials are provided as input, the visual style of the generated video (such as tone and lighting) remains highly unified. For example, in a cultural tourism promotional video, transitions between scenes such as sunrise over an ancient city, street markets, and folk performances maintain a consistent "Chinese style" filter. In anime IP derivative content, character actions and expressions across different plot scenes also strictly adhere to the original settings, facilitating the coherent creation of multi-scene, multi-element content.

Quick Start

1. Image-to-Video Generation

curl --location --request POST 'https://api.z.ai/api/paas/v4/videos/generations' \
--header 'Authorization: Bearer {your apikey}' \
--header 'Content-Type: application/json' \
--data-raw '{
    "model":"vidu2-image",
    "image_url":"https://example.com/path/to/your/image.jpg",
    "prompt":"Peter Rabbit drives a small car along the road, his face filled with joy and happiness.",
    "duration":4,
    "size":"720x480",
    "movement_amplitude":"auto"
}'

2. Start and End Frame

curl --location --request POST 'https://api.z.ai/api/paas/v4/videos/generations' \
--header 'Authorization: Bearer {your apikey}' \
--header 'Content-Type: application/json' \
--data-raw '{
    "model":"vidu2-start-end",
    "image_url":["https://example.com/path/to/your/image1.jpg","https://example.com/path/to/your/image2.jpg"],
    "prompt":"Peter Rabbit drives a small car along the road, his face filled with joy and happiness.",
    "duration":4,
    "size":"720x480",
    "movement_amplitude":"auto"
}'

3. Reference-based Video Generation

curl --location --request POST 'https://api.z.ai/api/paas/v4/videos/generations' \
--header 'Authorization: Bearer {your apikey}' \
--header 'Content-Type: application/json' \
--data-raw '{
    "model":"vidu2-reference",
    "image_url":["https://example.com/path/to/your/image1.jpg","https://example.com/path/to/your/image2.jpg","https://example.com/path/to/your/image3.jpg"],
    "prompt":"Peter Rabbit drives a small car along the road, his face filled with joy and happiness.",
    "duration":4,
    "aspect_ratio":"16:9",
    "size":"720x480",
    "movement_amplitude":"auto",
    "with_audio":true
}'
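The generation endpoints above respond asynchronously, typically returning a task identifier rather than the finished video. Below is a hedged sketch of fetching the result; the `/async-result/{id}` path and the task-id placeholder are assumptions here, so confirm the exact retrieval endpoint and response shape in the API Documentation before use.

```shell
API_KEY="{your apikey}"
TASK_ID="{task id from the generation response}"
# Assumed result-retrieval endpoint; verify the exact path in the API docs.
RESULT_URL="https://api.z.ai/api/paas/v4/async-result/$TASK_ID"
# Uncomment to poll for the finished video:
# curl --location --request GET "$RESULT_URL" \
# --header "Authorization: Bearer $API_KEY"
echo "$RESULT_URL"
```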