Introducing LTX-2 19B IC-LoRA Trainer: Create Custom Video-to-Video Transformations
The world of AI video generation has entered a new era. With the launch of the LTX-2 19B IC-LoRA Trainer on WaveSpeedAI, creators and developers can now train custom In-Context LoRA adapters for precise video-to-video transformations—unlocking capabilities like depth control, pose adaptation, video restoration, and style transfer without managing complex GPU infrastructure.
What is LTX-2 IC-LoRA Trainer?
The LTX-2 IC-LoRA Trainer is a specialized training service built on Lightricks’ powerful 19-billion parameter Diffusion Transformer architecture. Unlike traditional LoRAs that simply modify visual styles or effects, In-Context LoRAs (IC-LoRAs) enable you to condition video generation on reference signals—giving you frame-level precision over spatial structure and motion.
This means you can train custom adapters that transform videos based on depth maps, pose skeletons, edge detections, or any paired video data you provide. The result? Your AI-generated videos maintain precise structural control while inheriting the creative flexibility of the LTX-2 base model.
Key Features
- Video-to-Video Transformation Training: Upload paired video datasets to train adapters for custom control tasks—from depth map animations to character motion retargeting
- Multiple Control Modalities: Create adapters for depth control, pose control, canny edge detection, video restoration, style transfer, and more
- Lightweight LoRA Architecture: Low-Rank Adaptation keeps file sizes small while capturing complex transformations, making deployment fast and efficient
- Seamless Integration: Trained LoRAs work directly with LTX-2 Text-to-Video LoRA and Image-to-Video LoRA inference endpoints
- Stack Multiple Adapters: Load up to three LoRA modules simultaneously for combined effects and controls
- Configurable Training Parameters: Fine-tune steps, learning rate, and LoRA rank to balance quality, training time, and adapter capacity
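The adapter-stacking feature above can be sketched as a small payload builder. Note this is a hedged sketch: the `loras` key, the `path`/`scale` fields, and the request shape are assumptions for illustration, not the documented endpoint schema; only the three-module cap comes from the feature list.

```python
# Hypothetical request shape for stacking adapters; field names are
# assumptions -- consult the LTX-2 inference endpoint docs for the
# exact schema. The three-adapter limit comes from the feature list.
MAX_LORAS = 3

def build_lora_stack(adapters):
    """Combine up to three (url, scale) pairs into one request payload."""
    if len(adapters) > MAX_LORAS:
        raise ValueError(f"at most {MAX_LORAS} LoRA modules can be loaded")
    return [{"path": url, "scale": scale} for url, scale in adapters]

stack = build_lora_stack([
    ("https://example.com/depth-control.safetensors", 1.0),   # structural control
    ("https://example.com/film-grade.safetensors", 0.7),      # style, blended lighter
])
```

Blending scales per adapter lets a structural control LoRA run at full strength while a style LoRA contributes more subtly.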
Powerful Use Cases
Depth and Pose Control Adapters
Train custom adapters that transform depth maps or pose skeletons into fully realized videos. This enables workflows where you can animate 3D scene layouts, retarget character motion from reference performances, or generate videos that precisely follow skeletal animations—all while maintaining LTX-2’s high-quality output.
Video Restoration and Detail Enhancement
Create detailer IC-LoRAs that enhance fine textures and recover details in generated videos. By training on pairs of low-quality and high-quality video samples, you can build restoration adapters that elevate output quality for specific content types or visual domains.
Style Transfer Pipelines
Lock video generation to specific aesthetic guidelines—cinematic looks, anime styles, brand color palettes, or artistic treatments. Train on paired examples of source and styled videos to create adapters that transform any content into your signature visual style.
Brand-Consistent Content Production
For marketing teams and agencies, train adapters that ensure every generated video matches brand guidelines. From motion patterns to color grading, your custom IC-LoRA becomes a reusable asset for scalable video production.
Motion Pattern Learning
Capture specific motion dynamics—whether it’s the flow of fabric, the movement of particles, or unique animation styles. Your trained adapter can then apply these motion patterns to new content, enabling consistent motion branding across projects.
Getting Started on WaveSpeedAI
Training your custom IC-LoRA on WaveSpeedAI is straightforward:
Step 1: Prepare Your Training Data
Create a ZIP file containing paired videos that demonstrate the transformation you want to learn. For depth control, include depth map videos alongside their target outputs. For style transfer, pair source videos with styled versions. Include 10-20 diverse, high-quality examples for best results.
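A minimal sketch of bundling the dataset, assuming the trainer accepts a flat ZIP of video files. The `depth_*/target_*` naming convention shown here is an assumption for illustration; check the trainer's dataset documentation for the exact pairing scheme it expects.

```python
import zipfile
from pathlib import Path

def build_training_zip(pairs_dir: str, out_zip: str) -> int:
    """Bundle paired training videos (e.g. depth_001.mp4 / target_001.mp4)
    into a single ZIP for upload. The file-naming convention is an
    assumption -- verify the expected pairing scheme in the trainer docs."""
    files = sorted(Path(pairs_dir).glob("*.mp4"))
    with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        for f in files:
            zf.write(f, arcname=f.name)  # flat archive, no subfolders
    return len(files)
```

Once built, upload the ZIP to any publicly reachable storage URL and pass that URL as the `data` parameter in your training request.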
Step 2: Configure Your Training
Set a unique trigger word that will activate your trained transformation—something distinctive like “depth2vid” or “mystyle_v1” that won’t conflict with common vocabulary.
Adjust parameters based on your needs:
- Steps (default: 500): More steps capture finer details but increase training time
- Learning Rate (default: 0.0002): Lower values provide more stable training
- LoRA Rank (default: 32): Higher ranks (32-64) enable more dramatic transformations
Step 3: Launch Training
Submit your training job through the WaveSpeedAI API. The system handles all GPU allocation and optimization automatically.
```python
import wavespeed

output = wavespeed.run(
    "wavespeed-ai/ltx-2-19b/ic-lora-trainer",
    {
        "data": "https://your-storage.com/training-pairs.zip",
        "trigger_word": "depth2vid",
        "steps": 500,
        "lora_rank": 32,
    },
)
print(output["outputs"][0])  # LoRA weights URL
```
Step 4: Deploy Your Adapter
Once training completes, you receive a .safetensors file compatible with LTX-2 inference endpoints. Apply your custom IC-LoRA to the LTX-2 19B Text-to-Video LoRA or LTX-2 19B Image-to-Video LoRA models for immediate use.
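The deployment step above might look like the following. This is a hedged sketch: the endpoint slug and the `loras` payload fields are assumptions for illustration; consult the Text-to-Video LoRA endpoint reference for the exact request schema.

```python
# Hypothetical inference payload; field names are assumptions, not the
# documented schema. The trigger word chosen at training time activates
# the learned transformation.
payload = {
    "prompt": "depth2vid, a dancer spinning under warm stage lights",
    "loras": [
        {"path": "https://example.com/my-adapter.safetensors", "scale": 1.0}
    ],
}
# video = wavespeed.run("wavespeed-ai/ltx-2-19b/t2v-lora", payload)
```

The key detail is including your trigger word in the prompt; without it, the base model ignores the adapter's learned transformation.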
Transparent Pricing
Training costs scale linearly with the number of steps:
| Training Steps | Price |
|---|---|
| 100 steps | $0.75 |
| 500 steps | $3.75 |
| 1,000 steps | $7.50 |
| 2,000 steps | $15.00 |
At just $0.75 per 100 steps, you can experiment with different configurations affordably before committing to longer training runs.
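The linear pricing above reduces to a one-line calculation, which makes budgeting a sweep of experiments straightforward:

```python
def training_cost_usd(steps: int, rate_per_100: float = 0.75) -> float:
    """Training price scales linearly: $0.75 per 100 steps."""
    return round(steps / 100 * rate_per_100, 2)

print(training_cost_usd(500))   # 3.75
print(training_cost_usd(2000))  # 15.0
```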
Why Train on WaveSpeedAI?
- No Cold Starts: Your training jobs start immediately without waiting for GPU allocation
- No Infrastructure Management: Skip the complexity of provisioning 48GB+ VRAM machines
- Instant Deployment: Trained LoRAs deploy directly to WaveSpeedAI’s optimized inference endpoints
- Affordable Experimentation: Test multiple configurations and iterations without infrastructure lock-in
- Production-Ready Output: Export standard .safetensors files compatible with the broader LTX-2 ecosystem
Pro Tips for Better Results
- Dataset Diversity: Include varied examples covering different scenarios your adapter should handle
- Consistent Quality: Use high-resolution, well-paired training data—garbage in, garbage out applies here
- Start Small: Begin with 500 steps and default settings, then increase parameters if needed
- Unique Triggers: Choose trigger words that are distinctive and unlikely to appear in normal prompts
- Iterate Quickly: The affordable pricing lets you run multiple experiments to find optimal settings
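To put the "iterate quickly" tip into practice, you can enumerate candidate configurations up front and submit them as separate short runs. A minimal sketch, reusing the parameter names from the training request above:

```python
from itertools import product

def sweep_configs(steps_options, rank_options, trigger="depth2vid"):
    """Enumerate every steps/rank combination as a training request body.
    Parameter names mirror the training call shown earlier."""
    return [
        {"trigger_word": trigger, "steps": s, "lora_rank": r}
        for s, r in product(steps_options, rank_options)
    ]

# Two step counts x two ranks = four cheap experiments
configs = sweep_configs([100, 500], [16, 32])
```

Running a few 100-step probes first, then committing to a longer run with the best-performing rank, keeps experimentation costs low.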
Start Training Today
The LTX-2 19B IC-LoRA Trainer opens up a new dimension of control in AI video generation. Whether you’re building production pipelines for brand content, creating specialized control adapters for creative tools, or developing novel video transformation effects, this trainer gives you the power to customize LTX-2’s behavior for your exact needs.
Explore the LTX-2 19B IC-LoRA Trainer on WaveSpeedAI →
Ready to push the boundaries of what’s possible with AI video? Train your first custom IC-LoRA today and transform how you create video content.