Ltx 2 19b Video LoRA Trainer

LTX-2 Audio-Video LoRA Trainer lets you train custom LoRA models with synchronized audio-video generation support. Train action, motion, and video-effect models by uploading a ZIP file containing videos with optional audio. Ready-to-use REST inference API, high performance, no cold starts, affordable pricing.

Features

LTX-2 19B Video-LoRA Trainer

LTX-2 19B Video-LoRA Trainer is a high-performance custom model training service for the LTX-2 19B video generation model. Train lightweight LoRA (Low-Rank Adaptation) adapters directly from video clips — capturing motion patterns, visual styles, and character appearances for personalized video generation with synchronized audio.


Why Choose This?

  • Train from videos: Upload video clips directly to capture motion dynamics, temporal patterns, and visual styles that static images cannot convey.

  • Motion learning: The trainer learns from video sequences, enabling LoRAs that understand movement, transitions, and temporal consistency.

  • Character consistency: Create LoRAs that maintain character identity and motion style across generated video clips.

  • Seamless integration: Trained LoRAs work directly with LTX-2 Text-to-Video LoRA and Image-to-Video LoRA models.


Training Process

  1. Data Upload: Prepare and upload a ZIP file containing your training videos. Include diverse clips that represent the style, character, or motion you want to capture.

  2. Configure Trigger Word: Set a unique trigger word (e.g., “p3r5on”) that will activate your trained style or character in prompts.

  3. Adjust Training Parameters

    • steps — Total training iterations (default: 500)
    • learning_rate — Training speed (default: 0.0002)
    • lora_rank — Adapter capacity (default: 32)
  4. LoRA Training: The system runs a tailored LoRA optimization loop:

    • Freezes the base model weights
    • Trains only the low-rank adapter layers
    • Learns from video sequences for motion-aware adaptation
  5. Model Export: After training completes, you receive a LoRA adapter file (.safetensors) compatible with the LTX-2 Text-to-Video LoRA and Image-to-Video LoRA models.
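
As an illustration, a dataset ZIP for step 1 might be assembled like this. File names and captions are hypothetical; per the `data` parameter's description, each optional caption is a .txt file sharing the name of the video it describes:

```shell
# Hypothetical dataset layout: videos plus same-named .txt caption files.
mkdir -p dataset
# dataset/clip_01.mp4, dataset/clip_02.mp4 would be your training videos.
printf 'a p3r5on dancing in the rain' > dataset/clip_01.txt
printf 'a p3r5on waving at the camera' > dataset/clip_02.txt
# Flatten the files into the archive that will be uploaded as `data`.
zip -qj training_videos.zip dataset/*
```

Note each caption includes the trigger word, so the trained concept binds to it.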

Parameters

| Parameter     | Default | Description                                            |
|---------------|---------|--------------------------------------------------------|
| data          | -       | ZIP file containing training videos (required)         |
| trigger_word  | -       | Unique word to activate your trained concept           |
| steps         | 500     | Total training iterations                              |
| learning_rate | 0.0002  | Training speed (lower = more stable, higher = faster)  |
| lora_rank     | 32      | Adapter capacity (higher = more detail, larger file)   |

Pricing

| Training Steps | Price (USD) |
|----------------|-------------|
| 100            | $0.35       |
| 500            | $1.75       |
| 1,000          | $3.50       |
| 2,000          | $7.00       |

Billing Rules

  • Base price: $0.35 per 100 steps
  • Total cost = $0.35 × (steps / 100)
  • Billed proportionally to the total number of steps in your job
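
For example, the billing rule above can be computed directly:

```shell
# Cost = $0.35 per 100 steps, billed proportionally to total steps.
steps=1000
cost=$(awk -v s="$steps" 'BEGIN { printf "%.2f", 0.35 * s / 100 }')
echo "Training for ${steps} steps costs \$${cost}"   # -> $3.50, matching the pricing table
```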

Best Use Cases

  • Motion Styles — Train on dance videos, action sequences, or specific movement patterns.
  • Character Animation — Capture how a character moves and behaves across multiple clips.
  • Brand Videos — Create consistent motion and visual style for marketing content.
  • Art Styles — Learn animated art styles from reference video clips.

Pro Tips

  • Use 5-10 diverse video clips that clearly show the style or character you want to capture.
  • Shorter clips (5-15 seconds) with consistent quality work better than long mixed footage.
  • Choose a unique trigger word that won’t conflict with common words.
  • Higher lora_rank (32-64) captures more detail but increases training time and file size.
  • Start with default settings, then adjust if needed.


Guidance


Notes

  • Higher parameter values (steps, lora_rank) will increase training time.
  • Training time scales with the number and length of videos configured.
  • For faster iterations, start with lower settings and increase gradually.
  • Video-based training captures motion patterns that image-based training cannot.

Authentication

For authentication details, please refer to the Authentication Guide.
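
All endpoints expect your API key as a Bearer token. A minimal setup sketch (the key value is a placeholder):

```shell
# Export your WaveSpeed API key once per shell session (placeholder value).
export WAVESPEED_API_KEY="your-api-key-here"
# Every request then authenticates with:
#   --header "Authorization: Bearer ${WAVESPEED_API_KEY}"
```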

API Endpoints

Submit Task & Query Result


# Submit the task
curl --location --request POST "https://api.wavespeed.ai/api/v3/wavespeed-ai/ltx-2-19b/video-lora-trainer" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{
    "data": "<your training videos ZIP>",
    "trigger_word": "p3r5on",
    "steps": 500,
    "learning_rate": 0.0002,
    "lora_rank": 32
}'

# Get the result
curl --location --request GET "https://api.wavespeed.ai/api/v3/predictions/${requestId}/result" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}"
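
The two calls above can be combined into a simple submit-and-poll loop. This sketch assumes `jq` is installed and uses the field names from the Response Parameters tables (`data.id`, `data.status`, `data.outputs`); pass in a JSON payload that includes your `data` ZIP:

```shell
# Submit a training task and poll until it completes or fails (assumes jq).
submit_and_poll() {
  payload=$1
  response=$(curl -s -X POST \
    "https://api.wavespeed.ai/api/v3/wavespeed-ai/ltx-2-19b/video-lora-trainer" \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer ${WAVESPEED_API_KEY}" \
    --data-raw "$payload")
  request_id=$(echo "$response" | jq -r '.data.id')
  while true; do
    result=$(curl -s "https://api.wavespeed.ai/api/v3/predictions/${request_id}/result" \
      -H "Authorization: Bearer ${WAVESPEED_API_KEY}")
    status=$(echo "$result" | jq -r '.data.status')
    if [ "$status" = "completed" ] || [ "$status" = "failed" ]; then
      break
    fi
    sleep 10
  done
  echo "$result" | jq -r '.data.outputs[]'   # LoRA adapter URL(s) on success
}
```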

Parameters

Task Submission Parameters

Request Parameters

| Parameter     | Type    | Required | Default | Range             | Description |
|---------------|---------|----------|---------|-------------------|-------------|
| data          | string  | Yes      | -       | -                 | ZIP file containing videos (with optional audio) for audio-video LoRA training. Each optional caption is a text file with the same name as the video it describes. |
| trigger_word  | string  | No       | p3r5on  | -                 | The phrase that triggers the trained concept when generating video. |
| steps         | integer | No       | 500     | 100 ~ 20000       | Number of steps to train the LoRA. |
| learning_rate | number  | No       | 0.0002  | 0.00000 ~ 1.00000 | Training speed (lower = more stable, higher = faster). |
| lora_rank     | integer | No       | 32      | 1 ~ 128           | Adapter capacity (higher = more detail, larger file). |

Response Parameters

| Parameter               | Type    | Description |
|-------------------------|---------|-------------|
| code                    | integer | HTTP status code (e.g., 200 for success) |
| message                 | string  | Status message (e.g., “success”) |
| data.id                 | string  | Unique identifier for the prediction (task ID) |
| data.model              | string  | Model ID used for the prediction |
| data.outputs            | array   | Array of URLs to the generated content (empty when status is not completed) |
| data.urls               | object  | Object containing related API endpoints |
| data.urls.get           | string  | URL to retrieve the prediction result |
| data.has_nsfw_contents  | array   | Array of boolean values indicating NSFW detection for each output |
| data.status             | string  | Status of the task: created, processing, completed, or failed |
| data.created_at         | string  | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”) |
| data.error              | string  | Error message (empty if no error occurred) |
| data.timings            | object  | Object containing timing details |
| data.timings.inference  | integer | Inference time in milliseconds |
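
A submission response shaped like the table above can be parsed with `jq`; the sample values here are hypothetical:

```shell
# Hypothetical submission response, mirroring the Response Parameters table.
response='{"code":200,"message":"success","data":{"id":"abc123","model":"wavespeed-ai/ltx-2-19b/video-lora-trainer","outputs":[],"urls":{"get":"https://api.wavespeed.ai/api/v3/predictions/abc123/result"},"status":"created","created_at":"2023-04-01T12:34:56.789Z","error":""}}'
task_id=$(echo "$response" | jq -r '.data.id')       # task ID for the result endpoint
status=$(echo "$response" | jq -r '.data.status')    # created / processing / completed / failed
poll_url=$(echo "$response" | jq -r '.data.urls.get')
echo "$task_id is $status"
```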

Result Request Parameters

| Parameter | Type   | Required | Default | Description |
|-----------|--------|----------|---------|-------------|
| id        | string | Yes      | -       | Task ID     |

Result Response Parameters

| Parameter              | Type    | Description |
|------------------------|---------|-------------|
| code                   | integer | HTTP status code (e.g., 200 for success) |
| message                | string  | Status message (e.g., “success”) |
| data                   | object  | The prediction data object containing all details |
| data.id                | string  | Unique identifier for the prediction (task ID) |
| data.model             | string  | Model ID used for the prediction |
| data.outputs           | array   | Array of URLs to the generated content (empty when status is not completed) |
| data.urls              | object  | Object containing related API endpoints |
| data.urls.get          | string  | URL to retrieve the prediction result |
| data.status            | string  | Status of the task: created, processing, completed, or failed |
| data.created_at        | string  | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”) |
| data.error             | string  | Error message (empty if no error occurred) |
| data.timings           | object  | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |
© 2025 WaveSpeedAI. All rights reserved.