LoRA Training & Usage

Create and use LoRA models for personalized AI generation.

What is LoRA?

LoRA (Low-Rank Adaptation) allows you to fine-tune AI models with your own images, creating personalized styles or consistent characters.

Why Use LoRA?

| Use Case | Example |
|---|---|
| Consistent characters | Generate the same person/character across multiple images |
| Personal style | Apply your unique artistic style to any prompt |
| Brand assets | Generate on-brand product images or mascots |
| Face/likeness | Create images of a specific person (with consent) |

Benefits:

  • No need to describe appearance in every prompt
  • More consistent results than prompt-only generation
  • Train once, use unlimited times

Training Your Own LoRA

Create personalized AI models without coding:

Step 1: Prepare Your Dataset

  1. Collect 10-20 diverse images

    • High quality, no watermarks or text overlays
    • Multiple angles and poses of your subject
    • Varied lighting and backgrounds
    • Subject clearly visible in each image
    • Consistent image dimensions recommended
  2. Compress into a .zip file

    • All images in one folder
    • Supported formats: JPG, PNG, WebP
    • Optionally include .txt files with captions (same filename as image)
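
The preparation steps above can be sketched in Python (a minimal illustration of the guidelines; the function name and validation rules are assumptions based on this guide, not an official tool):

```python
import zipfile
from pathlib import Path

SUPPORTED = {".jpg", ".jpeg", ".png", ".webp"}

def validate_and_zip(folder: str, out_zip: str, min_images: int = 10) -> list[str]:
    """Check a dataset folder against the guidelines above, then zip it.

    Returns the names of the image files included in the archive.
    """
    folder_path = Path(folder)
    images = [p for p in folder_path.iterdir() if p.suffix.lower() in SUPPORTED]
    if len(images) < min_images:
        raise ValueError(f"need at least {min_images} images, found {len(images)}")

    with zipfile.ZipFile(out_zip, "w") as zf:
        for img in images:
            zf.write(img, arcname=img.name)
            # Optional caption: a .txt file sharing the image's filename.
            caption = img.with_suffix(".txt")
            if caption.exists():
                zf.write(caption, arcname=caption.name)
    return [img.name for img in images]
```

The quality checks (watermarks, subject visibility, varied lighting) still need a human eye; this only catches format and count problems.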

Step 2: Configure Training

| Parameter | Description | Recommendation |
|---|---|---|
| Trigger Word | Unique identifier for your model | Use something unique like “p3r5on” |
| Steps | Training iterations | 500-2000 for most use cases |
| Learning Rate | Training speed | Default is usually optimal |
| LoRA Rank | Model complexity | Higher = more detail, slower |

Step 3: Train

  1. Go to a LoRA trainer model page (e.g., LoRA Trainer)
  2. Upload your dataset
  3. Configure parameters
  4. Click Run and wait for completion

Browse all LoRA trainers at wavespeed.ai/models — filter by “Training” category.

Note: Training time depends on dataset size and parameters. Typical estimates: ~8 minutes for 1000 steps, ~25 minutes for 3000 steps. If a timeout or system error occurs, refunds are processed automatically.

Step 4: Use Your LoRA

Once training completes:

  • Copy the URL provided to use directly in API calls
  • Download the .safetensors file for local use
  • Upload to Hugging Face for permanent storage and easy sharing

Using Existing LoRA Models

Apply LoRA models from Hugging Face or Civitai in your generations.

Where to Find LoRA Models

| Platform | How to Reference | Where to Find Trigger Word |
|---|---|---|
| Hugging Face | owner/model-name or direct URL | Model Card section |
| Civitai | Direct URL only (copy from Files) | Details section |

Requirements

| Requirement | Details |
|---|---|
| Format | Only .safetensors files (not .zip, .GGUF, .PickleTensor) |
| Size | Typically under 2GB (usually a few hundred MB) |
| Compatibility | LoRA must match base model exactly (name, version, parameter size) |

API Example

```shell
curl --location --request POST 'https://api.wavespeed.ai/api/v3/wavespeed-ai/qwen-image/edit-plus-lora' \
--header 'Content-Type: application/json' \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{
  "prompt": "Turn the camera to a close-up.",
  "images": [
    "https://example.com/your-image.png"
  ],
  "loras": [
    {
      "path": "dx8152/Qwen-Edit-2509-Multiple-angles",
      "scale": 1
    }
  ],
  "size": "1536*1536",
  "output_format": "jpeg",
  "seed": -1
}'
```

Note: Parameters vary by model. Check the model’s README for all available options.
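The same request can be assembled in Python with only the standard library (a sketch mirroring the curl example above; the helper names are illustrative and error handling is omitted):

```python
import json
import os
import urllib.request

API_URL = "https://api.wavespeed.ai/api/v3/wavespeed-ai/qwen-image/edit-plus-lora"

def build_payload(prompt: str, images: list[str], loras: list[dict]) -> dict:
    """Mirror the JSON body from the curl example."""
    return {
        "prompt": prompt,
        "images": images,
        "loras": loras,            # each entry: {"path": ..., "scale": ...}
        "size": "1536*1536",
        "output_format": "jpeg",
        "seed": -1,                # -1 = random seed
    }

def submit(payload: dict) -> bytes:
    """POST the payload; requires WAVESPEED_API_KEY in the environment."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['WAVESPEED_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()

payload = build_payload(
    "Turn the camera to a close-up.",
    ["https://example.com/your-image.png"],
    [{"path": "dx8152/Qwen-Edit-2509-Multiple-angles", "scale": 1}],
)
```

Call `submit(payload)` to send the request; the response format depends on the model, so check the model's README.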

LoRA Parameters

| Parameter | Description | Range |
|---|---|---|
| path | URL or Hugging Face path | - |
| scale | Influence strength | 0.0 - 4.0 |

Tips

Trigger Word:

  • The trigger word activates your LoRA’s trained style/subject
  • You must include it in your prompt; without it, the LoRA may not take effect
  • Example: If trigger word is “p3r5on”, your prompt should be “p3r5on walking in a garden”, not just “a person walking in a garden”
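A prompt check like the one described takes only a few lines (a hypothetical helper, not part of any SDK):

```python
def ensure_trigger(prompt: str, trigger: str) -> str:
    """Prepend the trigger word if the prompt doesn't already contain it."""
    if trigger.lower() in prompt.lower():
        return prompt
    return f"{trigger} {prompt}"

# The trigger word activates the LoRA's trained subject:
prompt = ensure_trigger("walking in a garden", "p3r5on")
# prompt is now "p3r5on walking in a garden"
```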

Scale:

  • Controls how strongly the LoRA affects the output
  • Too low (< 0.5): Effect barely visible
  • Optimal (0.8-1.0): Balanced results
  • Too high (> 1.5): Over-stylized, may cause distortion

Combining Multiple LoRAs

Some models support combining high-noise and low-noise LoRAs for more control:

```shell
curl --location --request POST 'https://api.wavespeed.ai/api/v3/wavespeed-ai/wan-2.2/i2v-720p-lora' \
--header 'Content-Type: application/json' \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{
  "prompt": "A person walking through a garden",
  "image": "https://example.com/your-image.jpg",
  "duration": 5,
  "high_noise_loras": [
    {
      "path": "owner/high-noise-lora",
      "scale": 1
    }
  ],
  "low_noise_loras": [
    {
      "path": "owner/low-noise-lora-1",
      "scale": 1
    },
    {
      "path": "owner/low-noise-lora-2",
      "scale": 1
    }
  ],
  "seed": -1
}'
```

| Parameter | Description |
|---|---|
| high_noise_loras | Applied during early generation steps (overall style/structure) |
| low_noise_loras | Applied during later steps (fine details) |
| loras | Standard LoRAs (used by most models) |
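A combined request like the curl example above can be built programmatically (a sketch; the helper name is illustrative, and the 0.8 scales reflect starting conservatively when stacking LoRAs):

```python
def build_wan_payload(prompt: str, image: str,
                      high_noise: list[dict], low_noise: list[dict],
                      duration: int = 5) -> dict:
    """Mirror the wan-2.2 i2v request body from the curl example."""
    return {
        "prompt": prompt,
        "image": image,
        "duration": duration,
        "high_noise_loras": high_noise,   # early steps: style/structure
        "low_noise_loras": low_noise,     # late steps: fine details
        "seed": -1,
    }

payload = build_wan_payload(
    "A person walking through a garden",
    "https://example.com/your-image.jpg",
    high_noise=[{"path": "owner/high-noise-lora", "scale": 0.8}],
    low_noise=[{"path": "owner/low-noise-lora-1", "scale": 0.8},
               {"path": "owner/low-noise-lora-2", "scale": 0.8}],
)
```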

Tip: When combining LoRAs, start with lower scale values and adjust gradually.

Compatibility

LoRA must match the base model exactly:

  • Same model name
  • Same version
  • Same parameter size

For example, a LoRA trained on FLUX Dev won’t work with FLUX Schnell. A LoRA for Wan 2.1 14B won’t work on Wan 2.1 5B.

How to check compatibility:

  • On Hugging Face: Check the Model Card for “base model” info
  • On Civitai: Check the “Base Model” field in Details section

Troubleshooting

| Problem | Cause | Solution |
|---|---|---|
| LoRA has no effect | Missing trigger word | Add the trigger word to your prompt |
| Effect too weak | Scale too low | Increase scale (try 1.0-1.2) |
| Output looks distorted | Scale too high | Lower scale (try 0.6-0.8) |
| Error: model not found | Wrong path format | Hugging Face: owner/name, Civitai: use full URL |
| Error: incompatible LoRA | Base model mismatch | Verify LoRA matches your base model exactly |
| File format error | Wrong file type | Only .safetensors files are supported |
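Several of the path and format errors above can be caught before submitting a request. A hypothetical pre-flight check encoding the rules from this guide:

```python
import re

def check_lora_path(path: str) -> str:
    """Return 'huggingface', 'civitai', or 'url' for a valid reference;
    raise ValueError for formats the troubleshooting table flags as errors."""
    # Only .safetensors is accepted; these extensions are common rejects.
    if path.endswith((".zip", ".gguf", ".pt", ".ckpt")):
        raise ValueError("only .safetensors files are supported")
    if re.fullmatch(r"[\w.-]+/[\w.-]+", path):
        return "huggingface"           # owner/name shorthand
    if path.startswith("https://civitai.com/"):
        return "civitai"               # Civitai needs the full download URL
    if path.startswith(("http://", "https://")):
        return "url"                   # direct link to a .safetensors file
    raise ValueError(f"unrecognized LoRA reference: {path!r}")
```

Base-model compatibility can't be checked from the path alone; verify it against the Model Card or the Civitai Details section as described above.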

© 2025 WaveSpeedAI. All rights reserved.