WaveSpeed.ai
LTX-2 Audio-Video LoRA Trainer

wavespeed-ai/ltx-2-19b/video-lora-trainer

LTX-2 Audio-Video LoRA Trainer lets you train custom LoRA models with synchronized audio-video generation support. Train action, motion, and video-effect models by uploading a ZIP file containing videos with optional audio. Ready-to-use REST inference API, best performance, no cold starts, affordable pricing.


Your request will cost $0.35 per run.

With $10 you can run this model approximately 28 times.

README

LTX-2 19B Video-LoRA Trainer

LTX-2 19B Video-LoRA Trainer is a high-performance custom model training service for the LTX-2 19B video generation model. Train lightweight LoRA (Low-Rank Adaptation) adapters directly from video clips — capturing motion patterns, visual styles, and character appearances for personalized video generation with synchronized audio.

Why Choose This?

  • Train from videos — Upload video clips directly to capture motion dynamics, temporal patterns, and visual styles that static images cannot convey.

  • Motion learning — The trainer learns from video sequences, enabling LoRAs that understand movement, transitions, and temporal consistency.

  • Character consistency — Create LoRAs that maintain character identity and motion style across generated video clips.

  • Seamless integration — Trained LoRAs work directly with LTX-2 Text-to-Video LoRA and Image-to-Video LoRA models.

Training Process

  1. Data Upload — Prepare and upload a ZIP file containing your training videos. Include diverse clips that represent the style, character, or motion you want to capture.

  2. Configure Trigger Word — Set a unique trigger word (e.g., "p3r5on") that will activate your trained style or character in prompts.

  3. Adjust Training Parameters

    • steps — Total training iterations (default: 500)
    • learning_rate — Training speed (default: 0.0002)
    • lora_rank — Adapter capacity (default: 32)
  4. LoRA Training — The system runs a tailored LoRA optimization loop:

    • Freezes the base model weights
    • Trains only the low-rank adapter layers
    • Learns from video sequences for motion-aware adaptation
  5. Model Export — After training completes, you receive a LoRA adapter file (.safetensors) compatible with the LTX-2 Text-to-Video LoRA and Image-to-Video LoRA models.
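The upload-and-configure flow above boils down to a single REST call with the documented parameters. The sketch below is illustrative only: the base URL, endpoint path, and JSON field names are assumptions modeled on the parameter table, not confirmed API details — check the official WaveSpeed API reference before use.

```python
import json
import urllib.request

API_BASE = "https://api.wavespeed.ai/api/v3"  # assumed base URL (not confirmed by this page)
API_KEY = "YOUR_API_KEY"

def build_training_payload(data_url, trigger_word, steps=500,
                           learning_rate=0.0002, lora_rank=32):
    """Assemble the JSON body from the documented parameters and defaults."""
    return {
        "data": data_url,              # URL of the uploaded ZIP of training videos
        "trigger_word": trigger_word,  # unique token that activates the concept
        "steps": steps,
        "learning_rate": learning_rate,
        "lora_rank": lora_rank,
    }

def submit_training_job(payload):
    """POST the job. The endpoint path here is hypothetical."""
    req = urllib.request.Request(
        f"{API_BASE}/wavespeed-ai/ltx-2-19b/video-lora-trainer",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

payload = build_training_payload("https://example.com/dataset.zip", "p3r5on")
```

Leaving `steps`, `learning_rate`, and `lora_rank` at their defaults (500 / 0.0002 / 32) is the recommended starting point per the tips below.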

Parameters

| Parameter | Default | Description |
| --- | --- | --- |
| data | — | ZIP file containing training videos (required) |
| trigger_word | — | Unique word to activate your trained concept |
| steps | 500 | Total training iterations |
| learning_rate | 0.0002 | Training speed (lower = more stable, higher = faster) |
| lora_rank | 32 | Adapter capacity (higher = more detail, larger file) |

Pricing

| Training Steps | Price (USD) |
| --- | --- |
| 100 | $0.35 |
| 500 | $1.75 |
| 1,000 | $3.50 |
| 2,000 | $7.00 |

Billing Rules

  • Base price: $0.35 per 100 steps
  • Total cost = $0.35 × (steps / 100)
  • Billed proportionally to the total number of steps in your job
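The billing rule is strictly linear in step count, so the cost of any job can be computed with the formula above:

```python
def training_cost(steps: int, rate_per_100: float = 0.35) -> float:
    """Cost in USD: $0.35 per 100 steps, billed proportionally."""
    return round(rate_per_100 * steps / 100, 2)
```

This reproduces the pricing table exactly (100 steps at $0.35 through 2,000 steps at $7.00).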

Best Use Cases

  • Motion Styles — Train on dance videos, action sequences, or specific movement patterns.
  • Character Animation — Capture how a character moves and behaves across multiple clips.
  • Brand Videos — Create consistent motion and visual style for marketing content.
  • Art Styles — Learn animated art styles from reference video clips.

Pro Tips

  • Use 5-10 diverse video clips that clearly show the style or character you want to capture.
  • Shorter clips (5-15 seconds) with consistent quality work better than long mixed footage.
  • Choose a unique trigger word that won't conflict with common words.
  • Higher lora_rank (32-64) captures more detail but increases training time and file size.
  • Start with default settings, then adjust if needed.
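Packaging the training clips described in the tips above is a one-step job with the standard library. A minimal sketch — the accepted video extensions are an assumption, and the flat archive layout is one reasonable choice, not a documented requirement:

```python
import zipfile
from pathlib import Path

def pack_dataset(clip_dir: str, out_zip: str = "dataset.zip") -> list[str]:
    """Zip the video clips in clip_dir for upload; returns the archived names."""
    # Extension filter is illustrative; check the service docs for accepted formats.
    clips = sorted(p for p in Path(clip_dir).iterdir()
                   if p.suffix.lower() in {".mp4", ".mov", ".webm"})
    with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        for clip in clips:
            zf.write(clip, arcname=clip.name)  # flat layout inside the archive
    return [c.name for c in clips]
```

Non-video files (captions, notes) are skipped here so the archive contains only the 5-10 clips the trainer expects.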

Guidance

Notes

  • Higher parameter values (steps, lora_rank) will increase training time.
  • Training time scales with the number and length of videos configured.
  • For faster iterations, start with lower settings and increase gradually.
  • Video-based training captures motion patterns that image-based training cannot.