LTX-2 19B Video LoRA Trainer
LTX-2 Audio-Video LoRA Trainer lets you train custom LoRA models with synchronized audio-video generation support. Train action, motion, and video-effect models by uploading a ZIP file containing videos with optional audio. Ready-to-use REST inference API, strong performance, no cold starts, affordable pricing.
Features
LTX-2 19B Video LoRA Trainer
LTX-2 19B Video LoRA Trainer is a high-performance custom model training service for the LTX-2 19B video generation model. Train lightweight LoRA (Low-Rank Adaptation) adapters directly from video clips — capturing motion patterns, visual styles, and character appearances for personalized video generation with synchronized audio.
Why Choose This?
- Train from videos — Upload video clips directly to capture motion dynamics, temporal patterns, and visual styles that static images cannot convey.
- Motion learning — The trainer learns from video sequences, enabling LoRAs that understand movement, transitions, and temporal consistency.
- Character consistency — Create LoRAs that maintain character identity and motion style across generated video clips.
- Seamless integration — Trained LoRAs work directly with the LTX-2 Text-to-Video LoRA and Image-to-Video LoRA models.
Training Process
- Data Upload — Prepare and upload a ZIP file containing your training videos (see the packaging sketch after this list). Include diverse clips that represent the style, character, or motion you want to capture.
- Configure Trigger Word — Set a unique trigger word (e.g., “p3r5on”) that will activate your trained style or character in prompts.
- Adjust Training Parameters:
  - steps — Total training iterations (default: 500)
  - learning_rate — Training speed (default: 0.0002)
  - lora_rank — Adapter capacity (default: 32)
- LoRA Training — The system runs a tailored LoRA optimization loop that:
  - Freezes the base model weights
  - Trains only the low-rank adapter layers
  - Learns from video sequences for motion-aware adaptation
- Model Export — After training completes, you receive a LoRA adapter file (.safetensors) compatible with the LTX-2 Text-to-Video LoRA and Image-to-Video LoRA models.
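A minimal packaging sketch, assuming each caption lives in a plain-text file named after its video (as the request parameters below describe) and includes the trigger word; the filenames here are hypothetical:

```bash
# Hypothetical layout: each caption .txt shares its video's basename
#   dataset/clip_01.mp4   dataset/clip_01.txt  ("p3r5on dancing in a studio")
#   dataset/clip_02.mp4   dataset/clip_02.txt  ("p3r5on waving at the camera")
cd dataset
zip ../training-videos.zip ./*.mp4 ./*.txt   # flat ZIP of videos plus captions
```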
Parameters
| Parameter | Default | Description |
|---|---|---|
| data | — | ZIP file containing training videos (required) |
| trigger_word | — | Unique word to activate your trained concept |
| steps | 500 | Total training iterations |
| learning_rate | 0.0002 | Training speed (lower = more stable, higher = faster) |
| lora_rank | 32 | Adapter capacity (higher = more detail, larger file) |
Pricing
| Training Steps | Price (USD) |
|---|---|
| 100 | $0.35 |
| 500 | $1.75 |
| 1,000 | $3.50 |
| 2,000 | $7.00 |
Billing Rules
- Base price: $0.35 per 100 steps
- Total cost = $0.35 × (steps / 100)
- Billed proportionally to the total number of steps in your job
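For example, a 750-step job costs $0.35 × 7.5 = $2.625. A quick shell estimate (the step count here is illustrative):

```bash
# Estimate cost: $0.35 per 100 steps, billed proportionally
STEPS=750
awk -v s="$STEPS" 'BEGIN { printf "Estimated cost: $%.3f\n", 0.35 * s / 100 }'
# -> Estimated cost: $2.625
```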
Best Use Cases
- Motion Styles — Train on dance videos, action sequences, or specific movement patterns.
- Character Animation — Capture how a character moves and behaves across multiple clips.
- Brand Videos — Create consistent motion and visual style for marketing content.
- Art Styles — Learn animated art styles from reference video clips.
Pro Tips
- Use 5-10 diverse video clips that clearly show the style or character you want to capture.
- Shorter clips (5-15 seconds) with consistent quality work better than long mixed footage (see the trimming sketch after these tips).
- Choose a unique trigger word that won’t conflict with common words.
- Higher lora_rank (32-64) captures more detail but increases training time and file size.
- Start with default settings, then adjust if needed.
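If your source footage runs longer than that, a tool such as ffmpeg can cut it into short clips before zipping; a minimal sketch (input/output names are hypothetical):

```bash
# Cut a 10-second clip starting at 0:05; -c copy avoids re-encoding
ffmpeg -ss 00:00:05 -i input.mp4 -t 10 -c copy clip_01.mp4
```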
Try More Trainers
- LTX-2 19B IC-LoRA Trainer — Train LoRAs from images for LTX-2 video generation.
- Qwen Image 2512 LoRA Trainer — Train LoRAs for Qwen Image text-to-image model.
- Z-Image LoRA Trainer — Train LoRAs for Z-Image models.
Guidance
Notes
- Higher parameter values (steps, lora_rank) will increase training time.
- Training time scales with the number and length of videos configured.
- For faster iterations, start with lower settings and increase gradually.
- Video-based training captures motion patterns that image-based training cannot.
Authentication
For authentication details, please refer to the Authentication Guide.
API Endpoints
Submit Task & Query Result
```bash
# Submit the task ("data" is required; replace the placeholder with your training ZIP)
curl --location --request POST "https://api.wavespeed.ai/api/v3/wavespeed-ai/ltx-2-19b/video-lora-trainer" \
    --header "Content-Type: application/json" \
    --header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
    --data-raw '{
        "data": "<your-training-videos.zip>",
        "trigger_word": "p3r5on",
        "steps": 500,
        "learning_rate": 0.0002,
        "lora_rank": 32
    }'

# Get the result (requestId is data.id from the submit response)
curl --location --request GET "https://api.wavespeed.ai/api/v3/predictions/${requestId}/result" \
    --header "Authorization: Bearer ${WAVESPEED_API_KEY}"
```
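Training runs asynchronously, so the result endpoint is typically polled until data.status reaches completed or failed. A minimal polling sketch, assuming jq is installed and requestId holds the data.id from the submit response:

```bash
# Poll until the task finishes; the .safetensors URL appears in data.outputs
while true; do
  RESP=$(curl -s --location "https://api.wavespeed.ai/api/v3/predictions/${requestId}/result" \
    --header "Authorization: Bearer ${WAVESPEED_API_KEY}")
  STATUS=$(echo "$RESP" | jq -r '.data.status')
  echo "status: $STATUS"
  if [ "$STATUS" = "completed" ] || [ "$STATUS" = "failed" ]; then break; fi
  sleep 10
done
echo "$RESP" | jq '.data.outputs'
```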
Parameters
Task Submission Parameters
Request Parameters
| Parameter | Type | Required | Default | Range | Description |
|---|---|---|---|---|---|
| data | string | Yes | - | - | ZIP file containing training videos with optional audio for audio-video LoRA training. Each caption text file should share the name of the video file it describes. |
| trigger_word | string | No | p3r5on | - | The phrase that activates your trained concept when generating a video. |
| steps | integer | No | 500 | 100 ~ 20000 | Number of training steps for the LoRA. |
| learning_rate | number | No | 0.0002 | 0.00000 ~ 1.00000 | Learning rate for LoRA training (lower = more stable, higher = faster). |
| lora_rank | integer | No | 32 | 1 ~ 128 | Rank of the LoRA adapter (higher = more detail, larger file). |
Response Parameters
| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., “success”) |
| data.id | string | Unique identifier for the prediction (task ID) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.has_nsfw_contents | array | Array of boolean values indicating NSFW detection for each output |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”) |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |
Result Request Parameters
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| id | string | Yes | - | Task ID |
Result Response Parameters
| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., “success”) |
| data | object | The prediction data object containing all details |
| data.id | string | Unique identifier for the prediction (the task ID passed in the request) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”) |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |