Qwen Image Edit 2511 LoRA
Playground
Try it on WavespeedAI!

Qwen Image Edit 2511 LoRA is an enhanced version with custom LoRA support for personalized styles. It delivers stronger edit consistency, robust multi-person identity/pose consistency, custom LoRA styles, enhanced industrial/product design, and improved geometric reasoning for structure-preserving edits. Built for stable production use with a ready-to-use REST API, no cold starts, and predictable pricing.
Features
Qwen-Image-Edit-2511-LoRA (20B, MMDiT)
Qwen-Image-Edit-2511-LoRA is an enhanced version of Qwen-Image-Edit-2511 with custom LoRA support, enabling personalized style transfer and character-consistent editing. Built on the Qwen-Image 20B (MMDiT) architecture, it delivers all the benefits of 2511 plus the flexibility to apply custom-trained LoRA models for unique artistic styles, brand consistency, or character preservation.
What’s new in 2511-LoRA
- Custom LoRA support: apply your own trained LoRA models or community LoRAs for personalized styles, characters, or brand aesthetics.
- Multi-LoRA blending: combine multiple LoRAs with individual weight control for complex style combinations.
- All 2511 improvements included:
  - Stronger multi-person consistency
  - Better industrial & product editing
  - Reduced drift across edits
  - Improved geometric reasoning
Core capabilities
- Custom style transfer: apply trained LoRA models to maintain consistent artistic style, character appearance, or brand identity across edits.
- Dual-mode editing:
  - Appearance editing: add/remove/modify elements while keeping other regions visually consistent.
  - Semantic editing: global style/pose/scene transformations that preserve intent while allowing broader pixel changes.
- Precise text editing (when applicable): add, delete, or replace on-image text while keeping natural typography behavior (spacing, alignment, style).
- Style preservation: maintains lighting, palette, and overall look while applying targeted changes.
Best for
- Character-consistent projects — maintain character appearance across multiple edits
- Brand & marketing — apply brand-specific styles consistently
- Artistic workflows — use custom artistic styles with LoRA models
- Multi-person projects — group photos, team portraits, event shots
- Industrial & product design — product mockups with custom brand styles
- Identity-preserving edits — portraits, characters, avatar refinement with style control
Example prompts
- Custom style: Add a sunset background while maintaining the anime style from my LoRA.
- Multi-person: Add a third person matching the existing lighting and apply my character LoRA.
- Product design: Convert this product to match my brand style guide (using brand LoRA).
- Character consistency: Keep the character’s appearance from my trained LoRA and change the background to a futuristic city.
Parameters
| Parameter | Description |
|---|---|
| prompt* | The edit instruction describing what to change and what to keep. |
| images* | Input images to edit or reference. Up to 3 images; the first image is typically treated as the main base image. |
| loras | Array of LoRA models to apply. Each LoRA object contains path (URL/path to LoRA file) and optional scale (weight 0-2, default 1.0). |
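A minimal request body might look like this (the image and LoRA URLs are placeholders for your own publicly accessible files):
{
    "prompt": "Add a sunset background while maintaining the anime style from my LoRA",
    "images": ["https://example.com/base.png"],
    "loras": [
        { "path": "https://example.com/loras/anime-style.safetensors", "scale": 0.8 }
    ]
}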
How to use
- Add your base image as the first item in images (you should see a preview in the UI).
- Optionally add 1–2 more reference images (maximum 3 total) to guide style, subject details, or composition.
- (Optional) Add LoRA models by providing the path/URL and weight for each LoRA you want to apply.
- Write a clear prompt describing the edit and constraints (examples: ‘keep face unchanged’, ‘keep pose’, ‘keep background’).
- Run the model and review the result.
- Iterate by adjusting LoRA weights or tightening constraints for best consistency.
Supported output formats typically include JPG / PNG / WEBP (as exposed by the endpoint).
LoRA Usage Tips
- Start with lower weights (0.5-0.8) and increase gradually for subtle style application
- Combine multiple LoRAs for complex effects, but keep total weight under 2.0 for stability
- Use publicly accessible URLs for LoRA files (or platform-supported paths)
- Test LoRAs individually before combining to understand their effects
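For example, blending a character LoRA with a style LoRA while keeping the combined weight under 2.0 could use a loras array like this (the URLs are placeholders):
"loras": [
    { "path": "https://example.com/loras/character.safetensors", "scale": 0.9 },
    { "path": "https://example.com/loras/brand-style.safetensors", "scale": 0.6 }
]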
Pricing
- $0.025 per edited image (with LoRA support)
Note
If you’re using image URLs or LoRA URLs (instead of uploading locally), make sure they’re publicly accessible. If the URL is valid, the interface will display a preview before you run the job.
Related Models
- Qwen Image Edit 2511 — Multi-Person Image Editing — Base version without LoRA support, optimized for fast multi-person and industrial editing.
- Qwen Image Edit — AI Image Editing & Inpainting — Prompt-driven image editing for object removal, background replacement, and inpainting with fast iterations and strong instruction following.
- Qwen Image Edit Plus — High-Fidelity Image Editing — Higher-quality image edits with cleaner edges, improved detail retention, and more stable results on complex scenes and textures.
Reference
Authentication
For authentication details, please refer to the Authentication Guide.
API Endpoints
Submit Task & Query Result
# Submit the task (the image and LoRA URLs below are placeholders; use your own publicly accessible files)
curl --location --request POST "https://api.wavespeed.ai/api/v3/wavespeed-ai/qwen-image/edit-2511-lora" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{
    "prompt": "Add a sunset background while maintaining the anime style from my LoRA",
    "images": ["https://example.com/base.png"],
    "loras": [
        { "path": "https://example.com/loras/anime-style.safetensors", "scale": 0.8 }
    ],
    "seed": -1,
    "output_format": "jpeg",
    "enable_base64_output": false,
    "enable_sync_mode": false
}'
# Get the result
curl --location --request GET "https://api.wavespeed.ai/api/v3/predictions/${requestId}/result" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}"
Parameters
Task Submission Parameters
Request Parameters
| Parameter | Type | Required | Default | Range | Description |
|---|---|---|---|---|---|
| prompt | string | Yes | - | - | The positive prompt for the generation. |
| images | array | Yes | [] | 1 ~ 3 items | The images to edit. A maximum of 3 reference images can be uploaded. |
| loras | array | No | - | - | Array of LoRA models to apply. Each LoRA can have a custom scale/weight. |
| loras[].path | string | Yes | - | - | Path to the LoRA model. |
| loras[].scale | float | Yes | - | 0.0 ~ 4.0 | Scale of the LoRA model. |
| size | string | No | - | 256 ~ 1536 per dimension | The size of the generated media in pixels (width*height). |
| seed | integer | No | -1 | -1 ~ 2147483647 | The random seed to use for the generation. -1 means a random seed will be used. |
| output_format | string | No | jpeg | jpeg, png, webp | The format of the output image. |
| enable_base64_output | boolean | No | false | - | If enabled, the output will be encoded into a BASE64 string instead of a URL. This property is only available through the API. |
| enable_sync_mode | boolean | No | false | - | If set to true, the function will wait for the result to be generated and uploaded before returning the response. It allows you to get the result directly in the response. This property is only available through the API. |
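Putting the table together, a fuller request body could look like the following (URLs are placeholders, and size must stay within 256–1536 pixels per dimension):
{
    "prompt": "Keep the character's appearance from my trained LoRA and change the background to a futuristic city",
    "images": [
        "https://example.com/portrait.png",
        "https://example.com/style-reference.png"
    ],
    "loras": [
        { "path": "https://example.com/loras/character.safetensors", "scale": 1.0 }
    ],
    "size": "1024*1024",
    "seed": -1,
    "output_format": "png",
    "enable_base64_output": false,
    "enable_sync_mode": false
}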
Response Parameters
| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., “success”) |
| data.id | string | Unique identifier for the prediction (task ID) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.has_nsfw_contents | array | Array of boolean values indicating NSFW detection for each output |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”) |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |
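For orientation, a successful submission response has roughly the following shape (all values are illustrative placeholders):
{
    "code": 200,
    "message": "success",
    "data": {
        "id": "0123456789abcdef",
        "model": "wavespeed-ai/qwen-image/edit-2511-lora",
        "outputs": [],
        "urls": {
            "get": "https://api.wavespeed.ai/api/v3/predictions/0123456789abcdef/result"
        },
        "has_nsfw_contents": [],
        "status": "created",
        "created_at": "2023-04-01T12:34:56.789Z",
        "error": "",
        "timings": {}
    }
}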
Result Request Parameters
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| id | string | Yes | - | Task ID |
Result Response Parameters
| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., “success”) |
| data | object | The prediction data object containing all details |
| data.id | string | Unique identifier for the prediction (the task ID passed in the result request) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when status is not completed). |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”) |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |
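A completed result response follows the same structure, with outputs populated (again, values are illustrative placeholders):
{
    "code": 200,
    "message": "success",
    "data": {
        "id": "0123456789abcdef",
        "model": "wavespeed-ai/qwen-image/edit-2511-lora",
        "outputs": ["https://example.com/outputs/result.jpeg"],
        "urls": {
            "get": "https://api.wavespeed.ai/api/v3/predictions/0123456789abcdef/result"
        },
        "status": "completed",
        "created_at": "2023-04-01T12:34:56.789Z",
        "error": "",
        "timings": {
            "inference": 4500
        }
    }
}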