LTX-2 19B IC-LoRA Trainer
Playground
Try it on WavespeedAI!
LTX-2 IC-LoRA Trainer lets you train custom In-Context LoRA models for video-to-video transformations, including depth/pose adapters, video restoration, and style transfer. Upload a ZIP file containing paired videos to start. Ready-to-use REST inference API, best performance, no cold starts, affordable pricing.
Features
LTX-2 19B IC-LoRA Trainer
LTX-2 19B IC-LoRA Trainer is a high-performance custom model training service for the LTX-2 19B video generation model. It allows you to train lightweight LoRA (Low-Rank Adaptation) adapters for personalized styles, characters, and concepts — bringing your custom visuals into AI-generated videos with synchronized audio.
Why Choose This?
- Video LoRA training: Train custom adapters specifically optimized for LTX-2’s 19B DiT video architecture.
- Character consistency: Create LoRAs that maintain character identity across generated video clips.
- Style personalization: Capture specific artistic styles, brand aesthetics, or visual themes for video content.
- Seamless integration: Trained LoRAs work directly with LTX-2 Text-to-Video LoRA and Image-to-Video LoRA models.
Training Process
- Data Upload: Prepare and upload a ZIP file containing your paired training videos. Include 10-20 high-quality, diverse video pairs for best results (see the dataset-layout sketch after this list).
- Configure Trigger Word: Set a unique trigger word (e.g., “p3r5on”) that will activate your trained style or character in prompts.
- Adjust Training Parameters:
  - steps — Total training iterations (default: 500)
  - learning_rate — Training speed (default: 0.0002)
  - lora_rank — Adapter capacity (default: 32)
- LoRA Training: The system runs a tailored LoRA optimization loop that:
  - Freezes the base model weights
  - Trains only the low-rank adapter layers
  - Applies LTX-2 optimized settings for video generation
- Model Export: After training completes, you receive a LoRA adapter file (.safetensors) compatible with the LTX-2 Text-to-Video LoRA and Image-to-Video LoRA models.
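As a rough illustration of the expected dataset layout (a minimal sketch based on the pairing and caption naming rules in the Request Parameters table; the clip names and folder are placeholders), a paired-video ZIP could be assembled like this:
# Each target clip needs a matching reference clip with the '_ref.mp4' suffix,
# plus a caption .txt sharing the same base name (per the caption-naming rule).
mkdir -p training_data
cp clips/scene01_ref.mp4 clips/scene01.mp4 clips/scene01.txt training_data/
cp clips/scene02_ref.mp4 clips/scene02.mp4 clips/scene02.txt training_data/
# Package everything into the ZIP passed as the 'data' parameter
cd training_data && zip -r ../training_videos.zip . && cd ..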
Parameters
| Parameter | Default | Description |
|---|---|---|
| data | — | ZIP file containing paired training videos (required) |
| trigger_word | — | Unique word to activate your trained concept |
| steps | 500 | Total training iterations |
| learning_rate | 0.0002 | Training speed (lower = more stable, higher = faster) |
| lora_rank | 32 | Adapter capacity (higher = more detail, larger file) |
Pricing
| Training Steps | Price (USD) |
|---|---|
| 100 | $0.75 |
| 500 | $3.75 |
| 1,000 | $7.50 |
| 2,000 | $15.00 |
Billing Rules
- Base price: $0.75 per 100 steps
- Total cost = $0.75 × (steps / 100)
- Billed proportionally to the total number of steps in your job
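For example, a small shell sketch applying the formula above to a hypothetical 1,500-step job:
# Estimate training cost at $0.75 per 100 steps
steps=1500
awk -v s="$steps" 'BEGIN { printf "Estimated cost: $%.2f\n", 0.75 * s / 100 }'
# Prints: Estimated cost: $11.25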
Best Use Cases
- Character LoRAs — Train on character images to maintain identity across video generations.
- Brand Styles — Create custom visual styles for consistent marketing video content.
- Art Styles — Capture specific artistic aesthetics for creative video projects.
- Product Visualization — Train on product photos for consistent video presentations.
Pro Tips
- Use 10-20 high-quality, diverse video pairs of your subject for best results.
- Choose a unique trigger word that won’t conflict with common words.
- Higher lora_rank (32-64) captures more detail but increases training time and file size.
- Lower learning_rate is more stable but requires more steps.
- Start with default settings, then adjust if needed.
Try More Trainers
- Qwen Image 2512 LoRA Trainer — Train LoRAs for Qwen Image text-to-image model.
- Z-Image LoRA Trainer — Train LoRAs for Z-Image models with Turbo-compatible optimization.
- Flux Dev LoRA Trainer — Train LoRAs for Flux Dev image generation model.
Guidance
Notes
- Higher parameter values (steps, lora_rank) will increase training time.
- Training time scales with the number of training videos and total steps configured.
- For faster iterations, start with lower settings and increase gradually.
Authentication
For authentication details, please refer to the Authentication Guide.
API Endpoints
Submit Task & Query Result
# Submit the task
# (the required "data" field must reference your training ZIP; the URL below is a placeholder)
curl --location --request POST "https://api.wavespeed.ai/api/v3/wavespeed-ai/ltx-2-19b/ic-lora-trainer" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{
    "data": "https://example.com/training_videos.zip",
    "trigger_word": "p3r5on",
    "steps": 500,
    "learning_rate": 0.0002,
    "lora_rank": 32
}'
# Get the result
curl --location --request GET "https://api.wavespeed.ai/api/v3/predictions/${requestId}/result" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}"
Parameters
Task Submission Parameters
Request Parameters
| Parameter | Type | Required | Default | Range | Description |
|---|---|---|---|---|---|
| data | string | Yes | - | - | Upload a ZIP file containing paired reference and target videos for IC-LoRA training. Reference videos must be named with the '_ref.mp4' suffix (e.g., 'video1_ref.mp4' pairs with 'video1.mp4'). Reference and target videos must have identical resolution and length. Caption text files should share the name of the video they describe. |
| trigger_word | string | No | p3r5on | - | The phrase that will trigger the model to generate a video. |
| steps | integer | No | 500 | 100 ~ 20000 | Number of steps to train the LoRA on. |
| learning_rate | number | No | 0.0002 | 0.00000 ~ 1.00000 | Learning rate for LoRA training (lower = more stable, higher = faster). |
| lora_rank | integer | No | 32 | 1 ~ 128 | Rank (capacity) of the LoRA adapter (higher = more detail, larger file). |
Response Parameters
| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., “success”) |
| data.id | string | Unique identifier for the prediction (task ID) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.has_nsfw_contents | array | Array of boolean values indicating NSFW detection for each output |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”) |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |
Result Request Parameters
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| id | string | Yes | - | Task ID |
Result Response Parameters
| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., “success”) |
| data | object | The prediction data object containing all details |
| data.id | string | Unique identifier for the prediction (the ID supplied in the result request) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”) |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |