Z-Image Base LoRA Trainer
Features
Z-Image Base LoRA Trainer
Z-Image Base LoRA Trainer is a high-performance custom model training service for the Z-Image text-to-image generation model. It allows you to train lightweight LoRA (Low-Rank Adaptation) adapters for personalized styles, characters, and concepts — bringing your custom visuals into AI-generated images.
Why Choose This?
- Efficient training: Train custom adapters specifically optimized for Z-Image's fast diffusion architecture.
- Compact and portable: Produces lightweight LoRA files that are easy to share and deploy.
- Plug-and-play compatibility: Trained LoRAs work directly with the Z-Image Base LoRA and Z-Image Turbo LoRA models.
- Preserves base model speed: Your custom styles inherit Z-Image's fast generation capabilities.
Training Process
- Data Upload: Prepare and upload a ZIP file containing your training images. Include 10-20 high-quality, diverse images for best results.
- Configure Trigger Word: Set a unique trigger word (e.g., "p3r5on") that will activate your trained style or character in prompts.
- Adjust Training Parameters:
  - steps: total training iterations (default: 1000)
  - learning_rate: training speed (default: 0.0001)
  - lora_rank: adapter capacity (default: 16)
- LoRA Training: The system runs a tailored LoRA optimization loop that:
  - freezes the base model weights,
  - trains only the low-rank adapter layers, and
  - applies Z-Image-optimized settings.
- Model Export: After training completes, you receive a LoRA adapter file (.safetensors) compatible with the Z-Image Base LoRA and Z-Image Turbo LoRA models.
Parameters
| Parameter | Required | Default | Description |
|---|---|---|---|
| data | Yes | — | ZIP file containing training images (min 4 images recommended) |
| trigger_word | No | p3r5on | Unique word to activate your trained concept |
| steps | No | 1000 | Number of training steps (500-10000) |
| learning_rate | No | 0.0001 | Training speed (lower = more stable) |
| lora_rank | No | 16 | Adapter capacity (1-64, higher = more detail) |
How to Use
- Prepare your images — collect 10-20 high-quality, diverse images of your subject.
- Create a ZIP file — package all images into a single ZIP archive.
- Upload your data — drag and drop or provide a public URL to your ZIP file.
- Set trigger word — choose a unique word that won’t conflict with common terms.
- Adjust parameters (optional) — modify steps, learning_rate, and lora_rank as needed.
- Run — submit and wait for training to complete.
- Download — receive your LoRA adapter file for use with Z-Image models.
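The preparation steps above (collect images, package them, optionally add captions) can be sketched with Python's standard library. This is an illustrative helper, not part of any official SDK; the folder layout and file names are assumptions:

```python
import zipfile
from pathlib import Path

def build_training_zip(image_dir: str, zip_path: str) -> list[str]:
    """Package images (and optional same-name .txt captions) into a ZIP.

    The trainer accepts caption files named after their image,
    e.g. photo_01.jpg alongside photo_01.txt.
    """
    image_exts = {".jpg", ".jpeg", ".png", ".webp"}
    added = []
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for img in sorted(Path(image_dir).iterdir()):
            if img.suffix.lower() not in image_exts:
                continue
            zf.write(img, img.name)          # store at the archive root
            added.append(img.name)
            caption = img.with_suffix(".txt")
            if caption.exists():             # optional caption file
                zf.write(caption, caption.name)
                added.append(caption.name)
    return added
```

Host the resulting archive at a public URL (or use the drag-and-drop upload) and pass that URL as the `data` parameter.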
Pricing
| Training Steps | Price (USD) |
|---|---|
| 1,000 | $1.25 |
| 2,000 | $2.50 |
| 5,000 | $6.25 |
| 10,000 | $12.50 |
Billing Rules
- Base price: $1.25 per 1,000 steps
- Total cost = $1.25 × (steps / 1,000)
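Since billing is linear in step count, the price table above reduces to a one-line formula. A quick sanity check (the range guard mirrors the documented 500-10000 step limit):

```python
def training_cost_usd(steps: int) -> float:
    """Cost at the documented rate of $1.25 per 1,000 steps."""
    if not 500 <= steps <= 10_000:
        raise ValueError("steps must be between 500 and 10000")
    return round(1.25 * steps / 1_000, 2)

# training_cost_usd(1000)  -> 1.25
# training_cost_usd(10000) -> 12.5
```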
Best Use Cases
- Character LoRAs — Train on character images to maintain identity across generations.
- Brand Styles — Create custom visual styles for consistent marketing imagery.
- Art Styles — Capture specific artistic aesthetics for creative projects.
- Product Photography — Train on product photos for consistent visual presentations.
Pro Tips
- Use 10-20 high-quality, diverse images of your subject for best results.
- Choose a unique trigger word that won’t conflict with common words (e.g., “m1style” instead of “style”).
- Higher lora_rank (32-64) captures more detail but increases training time and file size.
- Lower learning_rate (0.00005) is more stable but requires more steps.
- Start with default settings, then adjust if needed.
Try Your LoRA
After training, use your LoRA with these models:
- Z-Image Base LoRA — Base model with LoRA support.
- Z-Image Turbo LoRA — Fast generation with LoRA support.
Guidance
Notes
- Minimum recommended: 4 images, optimal: 10-20 images.
- Training time scales with the number of steps configured.
- Higher parameter values (steps, lora_rank) will increase training time and cost.
- For faster iterations, start with lower settings and increase gradually.
Authentication
For authentication details, please refer to the Authentication Guide.
API Endpoints
Submit Task & Query Result
```shell
# Submit the task
curl --location --request POST "https://api.wavespeed.ai/api/v3/wavespeed-ai/z-image/base-lora-trainer" \
    --header "Content-Type: application/json" \
    --header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
    --data-raw '{
    "data": "<public URL to your training-images ZIP>",
    "trigger_word": "p3r5on",
    "steps": 1000,
    "learning_rate": 0.0001,
    "lora_rank": 16
}'

# Get the result
curl --location --request GET "https://api.wavespeed.ai/api/v3/predictions/${requestId}/result" \
    --header "Authorization: Bearer ${WAVESPEED_API_KEY}"
```
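A Python equivalent of the two curl calls above, using only the standard library. This is a sketch under the assumption that the endpoints and payload fields behave as documented; the polling interval and helper names are illustrative:

```python
import json
import time
import urllib.request

API_BASE = "https://api.wavespeed.ai/api/v3"

def build_payload(data_url: str, **overrides) -> dict:
    """Fill in the documented defaults, then apply any caller overrides."""
    payload = {"data": data_url, "trigger_word": "p3r5on",
               "steps": 1000, "learning_rate": 0.0001, "lora_rank": 16}
    payload.update(overrides)
    return payload

def submit_training(data_url: str, api_key: str, **overrides) -> str:
    """POST the training task; returns the task ID (data.id)."""
    req = urllib.request.Request(
        f"{API_BASE}/wavespeed-ai/z-image/base-lora-trainer",
        data=json.dumps(build_payload(data_url, **overrides)).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {api_key}"},
        method="POST")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["data"]["id"]

def wait_for_result(task_id: str, api_key: str, poll_seconds: float = 10.0) -> dict:
    """Poll the result endpoint until the task completes or fails."""
    while True:
        req = urllib.request.Request(
            f"{API_BASE}/predictions/{task_id}/result",
            headers={"Authorization": f"Bearer {api_key}"})
        with urllib.request.urlopen(req) as resp:
            data = json.load(resp)["data"]
        if data["status"] in ("completed", "failed"):
            return data
        time.sleep(poll_seconds)
```

Typical use: call `submit_training(...)` with your ZIP URL and API key, then `wait_for_result(...)` with the returned task ID; the trained `.safetensors` URL appears in `data.outputs` once the status is `completed`.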
Parameters
Task Submission Parameters
Request Parameters
| Parameter | Type | Required | Default | Range | Description |
|---|---|---|---|---|---|
| data | string | Yes | - | - | URL to a ZIP archive containing the training images. Use at least 4 images; in general, the more the better. The archive may also contain text files with captions; each text file should have the same name as the image file it corresponds to. |
| trigger_word | string | No | p3r5on | - | Trigger word to use in the captions. If omitted, no trigger word is used; if no captions are provided, the trigger_word is used in place of captions. |
| steps | integer | No | 1000 | 500 ~ 10000 | Number of steps to train the LoRA on. |
| learning_rate | number | No | 0.0001 | 0.00000 ~ 1.00000 | Learning rate for the LoRA optimization; lower values are more stable but require more steps. |
| lora_rank | integer | No | 16 | 1 ~ 64 | Adapter capacity; higher values capture more detail but increase training time and file size. |
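The ranges in the table can be checked client-side before submitting, avoiding a round trip for an invalid request. This validator is an illustration of the documented constraints, not part of any official SDK:

```python
# (min, max) ranges taken from the request-parameter table above
RANGES = {"steps": (500, 10000), "learning_rate": (0.0, 1.0), "lora_rank": (1, 64)}

def validate_params(params: dict) -> list[str]:
    """Return human-readable problems; an empty list means the params look valid."""
    problems = []
    if not params.get("data"):
        problems.append("data (ZIP URL) is required")
    for name, (lo, hi) in RANGES.items():
        if name in params and not lo <= params[name] <= hi:
            problems.append(f"{name}={params[name]} is outside [{lo}, {hi}]")
    return problems
```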
Response Parameters
| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., “success”) |
| data.id | string | Unique identifier for the prediction (Task ID) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.has_nsfw_contents | array | Array of boolean values indicating NSFW detection for each output |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”) |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |
Result Request Parameters
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| id | string | Yes | - | Task ID |
Result Response Parameters
| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., “success”) |
| data | object | The prediction data object containing all details |
| data.id | string | Unique identifier for the prediction (the Task ID that was queried) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when status is not completed). |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”) |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |