SkyReels V1
SkyReels V1 is the first and most advanced open-source human-centric video foundation model, built by fine-tuning HunyuanVideo on O(10M) high-quality film and television clips.
Features
SkyReels V1 is a specialized generative video model designed to excel at human-centric short video synthesis. Rather than generating arbitrary motion or scenes, V1 has been specifically trained to understand actor movement, facial emotion, character framing, and scene pacing, all key components of professional film language. Its strong prior knowledge from cinematic datasets sets it apart from generalist video models. SkyReels V1 reliably generates short-form cinematic shots that feel intentional, stylized, and emotionally resonant, rather than struggling with distorted faces, jittery motion, or incoherent cuts.
Key Features
- Human-Centric Cinematic Understanding: Trained on 10M+ carefully selected film and television shots, SkyReels V1 excels at replicating human body movement, facial dynamics, and gesture consistency.
- Text-to-Video & Image-to-Video Conditioning: Whether you begin with a prompt or a reference image, V1 provides flexible control over the scene's visual intent.
- Stylized Composition & Framing: Outputs often mimic the camera work of film—close-ups, medium shots, and over-the-shoulder compositions are all represented in the model’s learned data distribution.
- Actor & Emotion Control (emergent): Although explicit actor identity control is limited, prompts describing mood, action, or style (e.g. “woman walking through neon-lit alley, determined expression”) result in expressive, coherent motion.
- Open-Source & API Deployable: SkyReels V1 is fully open-source and optimized for deployment on WaveSpeedAI—so you can plug it directly into your creative pipeline.
ComfyUI
SkyReels-V1 is also available on ComfyUI, providing local inference through a node-based workflow, so you can run flexible, efficient video generation on your own hardware across a variety of creative workflows.
Limitations
- Short-Form Focus: SkyReels V1 is optimized for short cinematic clips. Extended sequences or complex multi-scene narratives may exceed its current capabilities.
- No Explicit Actor Identity Control: While the model handles emotion and action well, it does not currently support fine-grained control over consistent character identity across videos.
- Stylized Over Literal: As a creative video model, outputs may favor stylized motion and framing over photorealistic accuracy—ideal for cinematic use, less so for documentary realism.
Out-of-Scope Use
The model and its derivatives may not be used in any way that violates applicable national, federal, state, local, or international law or regulation, including but not limited to:
- Exploiting, harming, or attempting to exploit or harm minors, including solicitation, creation, acquisition, or dissemination of child exploitative content.
- Generating or disseminating verifiably false information with the intent to harm others.
- Creating or distributing personal identifiable information that could be used to harm an individual.
- Harassing, abusing, threatening, stalking, or bullying individuals or groups.
- Producing non-consensual nudity or illegal pornographic content.
- Making fully automated decisions that adversely affect an individual’s legal rights or create binding obligations.
- Facilitating large-scale disinformation campaigns.
Accelerated Inference
Our accelerated inference approach leverages advanced optimization technology from WaveSpeedAI. This fusion technique significantly reduces computational overhead and latency, enabling rapid video generation without compromising quality. The system is designed to handle large-scale inference tasks efficiently while keeping real-time applications at an optimal balance between speed and accuracy. For further details, please refer to the blog post.
Authentication
For authentication details, please refer to the Authentication Guide.
API Endpoints
Submit Task & Query Result
# Submit the task
curl --location --request POST "https://api.wavespeed.ai/api/v2/wavespeed-ai/SkyReels-V1" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{
"prompt": "Camera tracking shot of the woman driving the red convertible along a cliffside road, her hair flowing in the wind, ocean reflecting sunlight",
"image": "https://d2g64w682n9w0w.cloudfront.net/media/images/1745145676350203025_T0lFAytr.jpg",
"guidance_scale": 6,
"num_inference_steps": 30,
"aspect_ratio": "16:9"
}'
# Get the result
curl --location --request GET "https://api.wavespeed.ai/api/v2/predictions/${requestId}/result" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}"
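The two curl calls above can be combined into a submit-then-poll loop. The sketch below is an illustrative Python port using only the standard library: the endpoint paths, header names, and response fields come from this document, while the function names, the polling interval, and the shortened placeholder payload are assumptions for the example.

```python
import json
import os
import time
import urllib.request

API_BASE = "https://api.wavespeed.ai/api/v2"
TERMINAL_STATUSES = {"completed", "failed"}  # per the status table below


def is_terminal(status: str) -> bool:
    """A task stops changing once data.status reaches completed or failed."""
    return status in TERMINAL_STATUSES


def submit_task(payload: dict, api_key: str) -> str:
    """POST the generation request and return the task id (data.id)."""
    req = urllib.request.Request(
        f"{API_BASE}/wavespeed-ai/SkyReels-V1",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["data"]["id"]


def poll_result(task_id: str, api_key: str, interval: float = 2.0) -> dict:
    """GET the result endpoint until data.status is terminal."""
    req = urllib.request.Request(
        f"{API_BASE}/predictions/{task_id}/result",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    while True:
        with urllib.request.urlopen(req) as resp:
            data = json.load(resp)["data"]
        if is_terminal(data["status"]):
            return data
        time.sleep(interval)


if __name__ == "__main__":
    key = os.environ["WAVESPEED_API_KEY"]
    task_id = submit_task(
        {"prompt": "Camera tracking shot ...", "image": "https://..."},
        key,
    )
    print(poll_result(task_id, key)["outputs"])
```

Polling with a fixed interval is the simplest approach; production code might add a timeout and exponential backoff.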
Parameters
Task Submission Parameters
Request Parameters
Parameter | Type | Required | Default | Range | Description |
---|---|---|---|---|---|
prompt | string | Yes | Camera tracking shot of the woman driving the red convertible along a cliffside road, her hair flowing in the wind, ocean reflecting sunlight | - | The prompt to generate the video from. |
image | string | Yes | https://d2g64w682n9w0w.cloudfront.net/media/images/1745145676350203025_T0lFAytr.jpg | - | URL of the image input. |
seed | integer | No | - | -1 ~ 9999999999 | Random seed for generation. If not provided, a random seed will be used. |
guidance_scale | number | No | 6 | 1 ~ 20 | Guidance scale for generation (between 1.0 and 20.0) |
num_inference_steps | integer | No | 30 | 1 ~ 50 | Number of denoising steps (between 1 and 50). Higher values give better quality but take longer. |
negative_prompt | string | No | - | - | Negative prompt to guide generation away from certain attributes. |
aspect_ratio | string | No | 16:9 | - | Aspect ratio of the output video |
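Out-of-range values are easy to catch before submission. The sketch below checks a payload against the ranges and required fields in the table above; the range values come from the table, while the helper name and the choice to raise ValueError are illustrative assumptions.

```python
def validate_payload(payload: dict) -> dict:
    """Check a SkyReels-V1 request against the documented parameter ranges."""
    if not payload.get("prompt"):
        raise ValueError("prompt is required")
    if not payload.get("image"):
        raise ValueError("image is required")
    seed = payload.get("seed", -1)
    if not (-1 <= seed <= 9999999999):
        raise ValueError("seed must be in -1 ~ 9999999999")
    gs = payload.get("guidance_scale", 6)
    if not (1 <= gs <= 20):
        raise ValueError("guidance_scale must be between 1.0 and 20.0")
    steps = payload.get("num_inference_steps", 30)
    if not (1 <= steps <= 50):
        raise ValueError("num_inference_steps must be between 1 and 50")
    return payload
```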
Response Parameters
Parameter | Type | Description |
---|---|---|
code | integer | HTTP status code (e.g., 200 for success) |
message | string | Status message (e.g., “success”) |
data.id | string | Unique identifier for the prediction (task ID) |
data.model | string | Model ID used for the prediction |
data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
data.urls | object | Object containing related API endpoints |
data.urls.get | string | URL to retrieve the prediction result |
data.has_nsfw_contents | array | Array of boolean values indicating NSFW detection for each output |
data.status | string | Status of the task: created, processing, completed, or failed |
data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”) |
data.error | string | Error message (empty if no error occurred) |
data.timings | object | Object containing timing details |
data.timings.inference | integer | Inference time in milliseconds |
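A response of the shape documented above can be unpacked in a few lines. This hypothetical helper surfaces the error field on failure and pairs each output URL with its NSFW flag; the function name is an assumption, but the field names match the table.

```python
def summarize_response(resp: dict) -> list:
    """Return (output_url, is_nsfw) pairs from a prediction response."""
    data = resp["data"]
    if data["status"] == "failed":
        raise RuntimeError(data.get("error") or "task failed")
    # data.outputs is empty until data.status is "completed"
    return list(zip(data["outputs"], data["has_nsfw_contents"]))
```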
Result Query Parameters
Result Request Parameters
Parameter | Type | Required | Default | Description |
---|---|---|---|---|
id | string | Yes | - | Task ID |
Result Response Parameters
Parameter | Type | Description |
---|---|---|
code | integer | HTTP status code (e.g., 200 for success) |
message | string | Status message (e.g., “success”) |
data | object | The prediction data object containing all details |
data.id | string | Unique identifier for the prediction |
data.model | string | Model ID used for the prediction |
data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
data.urls | object | Object containing related API endpoints |
data.urls.get | string | URL to retrieve the prediction result |
data.has_nsfw_contents | array | Array of boolean values indicating NSFW detection for each output |
data.status | string | Status of the task: created, processing, completed, or failed |
data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”) |
data.error | string | Error message (empty if no error occurred) |
data.timings | object | Object containing timing details |
data.timings.inference | integer | Inference time in milliseconds |