LoRA Training & Usage
Create and use LoRA models for personalized AI generation.
What is LoRA?
LoRA (Low-Rank Adaptation) allows you to fine-tune AI models with your own images, creating personalized styles or consistent characters.
Why Use LoRA?
| Use Case | Example |
|---|---|
| Consistent characters | Generate the same person/character across multiple images |
| Personal style | Apply your unique artistic style to any prompt |
| Brand assets | Generate on-brand product images or mascots |
| Face/likeness | Create images of a specific person (with consent) |
Benefits:
- No need to describe appearance in every prompt
- More consistent results than prompt-only generation
- Train once, use unlimited times
Training Your Own LoRA
Create personalized AI models without coding:
Step 1: Prepare Your Dataset
- Collect 10-20 diverse images
  - High quality, no watermarks or text overlays
  - Multiple angles and poses of your subject
  - Varied lighting and backgrounds
  - Subject clearly visible in each image
  - Consistent image dimensions recommended
- Compress into a .zip file
  - All images in one folder
  - Supported formats: JPG, PNG, WebP
  - Optionally include .txt files with captions (same filename as image)
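The packaging step above can be sketched in Python using only the standard library. This is an illustrative helper, not an official tool: it zips every supported image in a folder, pulling in any same-name .txt caption alongside it.

```python
import zipfile
from pathlib import Path

# Formats the docs list as supported for the dataset zip.
SUPPORTED = {".jpg", ".jpeg", ".png", ".webp"}

def package_dataset(image_dir: str, zip_path: str) -> list[str]:
    """Zip all supported images plus same-name .txt captions; return names added."""
    added = []
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for item in sorted(Path(image_dir).iterdir()):
            if item.suffix.lower() not in SUPPORTED:
                continue
            zf.write(item, arcname=item.name)  # flat layout: all images in one folder
            added.append(item.name)
            caption = item.with_suffix(".txt")
            if caption.exists():  # optional caption with the same filename
                zf.write(caption, arcname=caption.name)
                added.append(caption.name)
    return added
```

Captions are picked up automatically because they share the image's filename, matching the convention described above.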
Step 2: Configure Training
| Parameter | Description | Recommendation |
|---|---|---|
| Trigger Word | Unique identifier for your model | Use something unique like “p3r5on” |
| Steps | Training iterations | 500-2000 for most use cases |
| Learning Rate | Training speed | Default is usually optimal |
| LoRA Rank | Model complexity | Higher = more detail, slower |
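A quick pre-flight check of the settings in the table above can catch common mistakes before a run is paid for. This is a hedged sketch: the field names here are illustrative, not the trainer's actual parameter names.

```python
# Illustrative sanity check against the recommendations in the table.
# Keys ("trigger_word", "steps") are hypothetical names for this sketch.
def check_config(cfg: dict) -> list[str]:
    warnings = []
    if not cfg.get("trigger_word"):
        warnings.append("set a unique trigger word, e.g. 'p3r5on'")
    steps = cfg.get("steps", 0)
    if not 500 <= steps <= 2000:
        warnings.append("steps outside the 500-2000 range recommended for most use cases")
    return warnings
```

An empty list means the configuration matches the recommendations; otherwise each string describes one issue to fix.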
Step 3: Train
- Go to a LoRA trainer model page (e.g., LoRA Trainer)
- Upload your dataset
- Configure parameters
- Click Run and wait for completion
Browse all LoRA trainers at wavespeed.ai/models — filter by “Training” category.
Note: Training time depends on dataset size and parameters. Typical estimates: ~8 minutes for 1000 steps, ~25 minutes for 3000 steps. If a timeout or system error occurs, refunds are processed automatically.
Step 4: Use Your LoRA
Once training completes:
- Copy the URL provided to use directly in API calls
- Download the .safetensors file for local use
- Upload to Hugging Face for permanent storage and easy sharing
Using Existing LoRA Models
Apply LoRA models from Hugging Face or Civitai in your generations.
Where to Find LoRA Models
| Platform | How to Reference | Where to Find Trigger Word |
|---|---|---|
| Hugging Face | owner/model-name or direct URL | Model Card section |
| Civitai | Direct URL only (copy from Files) | Details section |
Requirements
| Requirement | Details |
|---|---|
| Format | Only .safetensors files (not .zip, .GGUF, .PickleTensor) |
| Size | Typically under 2GB (usually a few hundred MB) |
| Compatibility | LoRA must match base model exactly (name, version, parameter size) |
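The format and size requirements above can be checked locally before upload. A minimal sketch (the function name is illustrative):

```python
from pathlib import Path

# Typical size ceiling from the requirements table above.
MAX_BYTES = 2 * 1024**3  # 2 GB

def validate_lora_file(path: str) -> list[str]:
    """Return a list of problems; empty means the file passes the basic checks."""
    p = Path(path)
    problems = []
    if p.suffix.lower() != ".safetensors":
        problems.append(f"unsupported format '{p.suffix}': only .safetensors is accepted")
    if p.exists() and p.stat().st_size > MAX_BYTES:
        problems.append("file exceeds the typical 2GB limit")
    return problems
```

Note this only checks the filename and size; it cannot verify base-model compatibility, which is covered below under Compatibility.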
API Example
```shell
curl --location --request POST 'https://api.wavespeed.ai/api/v3/wavespeed-ai/qwen-image/edit-plus-lora' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer ${WAVESPEED_API_KEY}' \
--data-raw '{
    "prompt": "Turn the camera to a close-up.",
    "images": [
        "https://example.com/your-image.png"
    ],
    "loras": [
        {
            "path": "dx8152/Qwen-Edit-2509-Multiple-angles",
            "scale": 1
        }
    ],
    "size": "1536*1536",
    "output_format": "jpeg",
    "seed": -1
}'
```

Note: Parameters vary by model. Check the model's README for all available options.
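The same request can be sent from Python using only the standard library. This sketch mirrors the curl example above; the function names are illustrative, and assembling the payload separately lets you inspect it before sending.

```python
import json
import os
import urllib.request

# Endpoint from the curl example above.
API_URL = "https://api.wavespeed.ai/api/v3/wavespeed-ai/qwen-image/edit-plus-lora"

def build_payload(prompt: str, image_url: str, lora_path: str, scale: float = 1.0) -> dict:
    """Assemble the request body; 'loras' is a list, so more entries can be appended."""
    return {
        "prompt": prompt,
        "images": [image_url],
        "loras": [{"path": lora_path, "scale": scale}],
        "size": "1536*1536",
        "output_format": "jpeg",
        "seed": -1,
    }

def submit(payload: dict) -> dict:
    """POST the payload with the API key from the environment and return the JSON reply."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['WAVESPEED_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Usage: `submit(build_payload("Turn the camera to a close-up.", "https://example.com/your-image.png", "dx8152/Qwen-Edit-2509-Multiple-angles"))`.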
LoRA Parameters
| Parameter | Description | Range |
|---|---|---|
| path | URL or Hugging Face path | - |
| scale | Influence strength | 0.0 - 4.0 |
Tips
Trigger Word:
- The trigger word activates your LoRA’s trained style/subject
- You must include it in your prompt; without it, the LoRA may not take effect
- Example: If trigger word is “p3r5on”, your prompt should be “p3r5on walking in a garden”, not just “a person walking in a garden”
Scale:
- Controls how strongly the LoRA affects the output
- Too low (< 0.5): Effect barely visible
- Optimal (0.8-1.0): Balanced results
- Too high (> 1.5): Over-stylized, may cause distortion
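The two tips above can be expressed as small helpers: one that guarantees the trigger word appears in the prompt, and one that flags scale values outside the ranges listed. Both are illustrative sketches, not part of the API.

```python
def apply_trigger_word(prompt: str, trigger: str) -> str:
    """Prepend the trigger word if the prompt doesn't already contain it."""
    return prompt if trigger in prompt else f"{trigger} {prompt}"

def scale_hint(scale: float) -> str:
    """Classify a scale value per the guidance above."""
    if scale < 0.5:
        return "too low: effect barely visible"
    if scale > 1.5:
        return "too high: may over-stylize or distort"
    return "ok"
```

For example, `apply_trigger_word("walking in a garden", "p3r5on")` yields the prompt form recommended above.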
Combining Multiple LoRAs
Some models support combining high-noise and low-noise LoRAs for more control:
```shell
curl --location --request POST 'https://api.wavespeed.ai/api/v3/wavespeed-ai/wan-2.2/i2v-720p-lora' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer ${WAVESPEED_API_KEY}' \
--data-raw '{
    "prompt": "A person walking through a garden",
    "image": "https://example.com/your-image.jpg",
    "duration": 5,
    "high_noise_loras": [
        {
            "path": "owner/high-noise-lora",
            "scale": 1
        }
    ],
    "low_noise_loras": [
        {
            "path": "owner/low-noise-lora-1",
            "scale": 1
        },
        {
            "path": "owner/low-noise-lora-2",
            "scale": 1
        }
    ],
    "seed": -1
}'
```

| Parameter | Description |
|---|---|
| high_noise_loras | Applied during early generation steps (overall style/structure) |
| low_noise_loras | Applied during later steps (fine details) |
| loras | Standard LoRAs (used by most models) |
Tip: When combining LoRAs, start with lower scale values and adjust gradually.
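Following that tip, a payload builder can default every combined LoRA to a conservative scale. This is a sketch only: the field names follow the wan-2.2 example above, and the 0.8 default is an illustrative starting point, not a documented value.

```python
def combined_lora_payload(prompt, image_url, high_noise, low_noise, scale=0.8):
    """Build a combined-LoRA request body, starting each LoRA at a low scale."""
    return {
        "prompt": prompt,
        "image": image_url,
        "duration": 5,
        "high_noise_loras": [{"path": p, "scale": scale} for p in high_noise],
        "low_noise_loras": [{"path": p, "scale": scale} for p in low_noise],
        "seed": -1,
    }
```

From this baseline, raise individual scales gradually rather than starting all of them at 1.0 or above.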
Compatibility
LoRA must match the base model exactly:
- Same model name
- Same version
- Same parameter size
For example, a LoRA trained on FLUX Dev won’t work with FLUX Schnell. A LoRA for Wan 2.1 14B won’t work on Wan 2.1 5B.
How to check compatibility:
- On Hugging Face: Check the Model Card for “base model” info
- On Civitai: Check the “Base Model” field in Details section
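The "exact match" rule reduces to comparing three fields. A trivial sketch, with the tuples representing (name, version, parameter size) as read from the Model Card or Details section:

```python
def is_compatible(lora_base: tuple, model: tuple) -> bool:
    """True only when name, version, and parameter size all match exactly."""
    return lora_base == model
```

The examples above fail this check: FLUX Dev vs. FLUX Schnell differ in version, and Wan 2.1 14B vs. 5B differ in parameter size.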
Troubleshooting
| Problem | Cause | Solution |
|---|---|---|
| LoRA has no effect | Missing trigger word | Add the trigger word to your prompt |
| Effect too weak | Scale too low | Increase scale (try 1.0-1.2) |
| Output looks distorted | Scale too high | Lower scale (try 0.6-0.8) |
| Error: model not found | Wrong path format | Hugging Face: owner/name, Civitai: use full URL |
| Error: incompatible LoRA | Base model mismatch | Verify LoRA matches your base model exactly |
| File format error | Wrong file type | Only .safetensors files are supported |