Flux 2 Dev Edit LoRA
FLUX.2 [dev] Edit with LoRA support enables precise image-to-image editing with natural-language instructions, hex colour control, and personalised styles via custom LoRA adapters. It extends FLUX.2 [dev] Edit with up to 4 LoRAs for consistent, brand-specific results, served through a ready-to-use REST inference API with strong performance, no cold starts, and affordable per-image pricing.
Features
FLUX.2 [dev] — Edit with LoRA
FLUX.2 [dev] Edit LoRA is a lightweight, LoRA-aware editing model built on FLUX.2 [dev]. It’s designed to take an existing image and apply personalised, structure-preserving edits using natural language prompts plus up to 4 custom LoRA adapters—ideal when you want fast, consistent updates to assets you already have.
Ideal use cases
- Refreshing existing brand or product imagery with custom LoRAs
- Keeping characters consistent across many edited shots
- Rolling out seasonal or campaign variants from a stable asset library
- Post-processing generations from FLUX.2 [dev] Text-to-Image
- Teams that need low-cost, high-volume edits with style control
LoRA-guided editing on a compact engine
Starting from the same lean architecture as FLUX.2 [dev] Edit, Edit LoRA adds adapter hooks so your custom LoRAs can drive the look and feel of each edit. The base image anchors composition and identity, while LoRAs and prompts work together to adjust style, colours, and details—keeping edits fast and predictable even at scale.
Why teams pick this model
• Dev-class editing, upgraded with LoRAs
You keep the familiar behaviour of the original Edit model (local, prompt-based changes) and layer LoRAs on top for brand styles, art directions, or recurring characters—so everything stays on-model across campaigns.
• Multiple LoRAs in one pass
Attach up to four adapters at once, each with its own strength range (0–4). For example, combine a “character” LoRA, a “lighting/style” LoRA, and a “brand palette” LoRA while you update backgrounds, outfits, or props via text.
• Structure stays, style evolves
The model treats the input image as the anchor: faces, poses, and layout remain intact while textures, colours, and surface details are updated. That makes it ideal for catalogue refreshes and long-running series.
• Batch-friendly performance
Generate 1–4 edited variants per request using the same LoRA stack and prompt. This makes it easy to spin up A/B sets, platform variants, or bulk updates without manually tracking parameters per image.
• Open, integration-ready foundation
Built on the open FLUX.2 dev stack, so it plugs cleanly into your own LoRA training, storage, and deployment infrastructure, whether you manage LoRAs per client, per brand, or per project.
• Cost-efficient asset reuse
Because LoRAs are lightweight and edits are local, you can update large libraries of images at low cost—instead of regenerating everything from scratch or retraining a full model.
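The multi-LoRA stacking described above can be sketched in a few lines of client code. This is a minimal sketch, not an official SDK: it only assembles the `loras` array for the request payload and enforces the documented limits (at most 4 adapters, scale within 0.0-4.0). The LoRA paths are hypothetical placeholders.

```python
# Sketch: assembling a multi-LoRA stack for the request payload.
# Limits follow the Parameters section; LoRA paths are hypothetical.

MAX_LORAS = 4  # the API accepts at most 4 adapters per request


def build_lora_stack(*adapters):
    """Validate (path, scale) pairs and return a `loras` array."""
    if len(adapters) > MAX_LORAS:
        raise ValueError(f"at most {MAX_LORAS} LoRAs per request")
    stack = []
    for path, scale in adapters:
        if not 0.0 <= scale <= 4.0:
            raise ValueError(f"scale {scale} is outside the 0.0-4.0 range")
        stack.append({"path": path, "scale": scale})
    return stack


# Example: character + lighting + brand-palette adapters in one pass.
loras = build_lora_stack(
    ("my-org/character-v2", 1.0),
    ("my-org/studio-lighting", 0.7),
    ("my-org/brand-palette", 0.5),
)
```

The resulting list drops straight into the `loras` field of the request body shown in the API Endpoints section.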
Pricing
Simple per-image billing:
- $0.03 per edited image
FLUX.2 [dev] family on WaveSpeedAI
- FLUX.2 [dev] Text-to-Image – base model without LoRA for the fastest, most lightweight generation.
- FLUX.2 [dev] Text-to-Image with LoRA – personalised generation with up to 4 LoRAs per prompt.
- FLUX.2 [dev] Edit – prompt-based editing of existing images using the plain dev backbone.
- FLUX.2 [dev] Edit with LoRA – this model, for LoRA-powered, structure-preserving edits on existing assets.
More LoRA-support image tools
- qwen-image/edit-plus-lora – combines Qwen’s strong semantic understanding with LoRA-based style control for precise, localised edits that keep the overall composition intact.
- FLUX Kontext LoRA – a FLUX.2 dev LoRA stack tuned for cleaner prompts, better context handling, and more coherent, production-friendly generations.
- SDXL-LoRA – a collection of SDXL LoRAs covering many styles and subjects, ideal for fast visual customisation without full-model fine-tuning.
LoRA resources
- Use your LoRA – how to plug existing LoRAs into WaveSpeedAI.
- Train your LoRA – step-by-step guide to training LoRAs without writing code.
Authentication
For authentication details, please refer to the Authentication Guide.
API Endpoints
Submit Task & Query Result
# Submit the task
curl --location --request POST "https://api.wavespeed.ai/api/v3/wavespeed-ai/flux-2-dev/edit-lora" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{
    "prompt": "Replace the background with a sunset beach",
    "images": ["https://example.com/input.jpg"],
    "loras": [],
    "seed": -1,
    "enable_base64_output": false,
    "enable_sync_mode": false
}'
# Get the result
curl --location --request GET "https://api.wavespeed.ai/api/v3/predictions/${requestId}/result" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}"
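The same submit-and-poll flow can be written with the Python standard library alone. This is a hedged sketch assuming the endpoints and response shape shown in the curl examples and tables on this page; the prompt and image URL in the demo call are illustrative placeholders, and error handling is kept minimal.

```python
# Sketch: submit an edit task, then poll the result endpoint until the
# task completes. Uses only the standard library; endpoints follow the
# curl examples above.
import json
import os
import time
import urllib.request

API_BASE = "https://api.wavespeed.ai/api/v3"
SUBMIT_URL = f"{API_BASE}/wavespeed-ai/flux-2-dev/edit-lora"


def result_url(request_id):
    """Build the polling URL for a submitted task."""
    return f"{API_BASE}/predictions/{request_id}/result"


def submit_and_wait(payload, api_key, interval=2.0, timeout=120.0):
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",
    }
    req = urllib.request.Request(
        SUBMIT_URL, data=json.dumps(payload).encode(), headers=headers, method="POST"
    )
    with urllib.request.urlopen(req) as resp:
        task = json.load(resp)["data"]

    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        poll = urllib.request.Request(result_url(task["id"]), headers=headers)
        with urllib.request.urlopen(poll) as resp:
            data = json.load(resp)["data"]
        if data["status"] == "completed":
            return data["outputs"]
        if data["status"] == "failed":
            raise RuntimeError(data["error"])
        time.sleep(interval)
    raise TimeoutError("task did not finish before the timeout")


if __name__ == "__main__" and os.environ.get("WAVESPEED_API_KEY"):
    outputs = submit_and_wait(
        {
            "prompt": "Make the jacket red",  # illustrative placeholder
            "images": ["https://example.com/input.jpg"],
            "loras": [],
            "seed": -1,
        },
        api_key=os.environ["WAVESPEED_API_KEY"],
    )
    print(outputs)
```

Setting `enable_sync_mode` to `true` instead makes the submit call block until the result is ready, removing the polling loop at the cost of a longer-held connection.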
Parameters
Task Submission Parameters
Request Parameters
| Parameter | Type | Required | Default | Range | Description |
|---|---|---|---|---|---|
| prompt | string | Yes | - | - | The prompt describing the desired edits to the image. |
| images | array | Yes | [] | 1 ~ 3 items | List of URLs of input images for editing. The maximum number of images is 3. |
| loras | array | No | [] | max 4 items | List of LoRAs to apply (max 4). |
| loras[].path | string | Yes | - | - | Path to the LoRA model. |
| loras[].scale | float | Yes | - | 0.0 ~ 4.0 | Scale of the LoRA model. |
| size | string | No | - | 256 ~ 1536 per dimension | The size of the generated media in pixels (width*height). |
| seed | integer | No | -1 | -1 ~ 2147483647 | The random seed to use for the generation. -1 means a random seed will be used. |
| enable_base64_output | boolean | No | false | - | If enabled, the output will be encoded into a BASE64 string instead of a URL. This property is only available through the API. |
| enable_sync_mode | boolean | No | false | - | If set to true, the function will wait for the result to be generated and uploaded before returning the response. It allows you to get the result directly in the response. This property is only available through the API. |
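The ranges in the table above can be checked client-side before spending an API round trip. This is a minimal sketch of such a validator, covering only the documented constraints; it is not part of the API itself.

```python
# Sketch: client-side validation of a request body against the documented
# ranges, so malformed payloads fail fast before an HTTP call.


def validate_request(body):
    """Raise ValueError if `body` violates the documented parameter ranges."""
    if not body.get("prompt"):
        raise ValueError("prompt is required")
    images = body.get("images", [])
    if not 1 <= len(images) <= 3:
        raise ValueError("images must contain 1 to 3 URLs")
    loras = body.get("loras", [])
    if len(loras) > 4:
        raise ValueError("at most 4 LoRAs may be applied")
    for lora in loras:
        if not 0.0 <= lora["scale"] <= 4.0:
            raise ValueError("each lora scale must be within 0.0 ~ 4.0")
    seed = body.get("seed", -1)
    if not -1 <= seed <= 2147483647:
        raise ValueError("seed must be within -1 ~ 2147483647")
    return body
```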
Response Parameters
| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., “success”) |
| data.id | string | Unique identifier for the prediction (task ID) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.has_nsfw_contents | array | Array of boolean values indicating NSFW detection for each output |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”) |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |
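A polling client typically branches on `data.status` as described above. The sketch below shows that logic against an illustrative response object whose fields follow this table; the sample values are placeholders, not real API output.

```python
# Sketch: interpreting a polling response per the Response Parameters
# table. The sample dict below uses illustrative placeholder values.


def outputs_or_status(response):
    """Return output URLs when completed, raise on failure, else the status."""
    data = response["data"]
    if data["status"] == "completed":
        return data["outputs"]
    if data["status"] == "failed":
        raise RuntimeError(data["error"] or "task failed")
    return data["status"]  # "created" or "processing": keep polling


sample = {
    "code": 200,
    "message": "success",
    "data": {
        "id": "task-123",
        "model": "wavespeed-ai/flux-2-dev/edit-lora",
        "outputs": ["https://example.com/out.png"],
        "urls": {"get": "https://api.wavespeed.ai/api/v3/predictions/task-123/result"},
        "has_nsfw_contents": [False],
        "status": "completed",
        "created_at": "2023-04-01T12:34:56.789Z",
        "error": "",
        "timings": {"inference": 1234},
    },
}
```

Note that `data.outputs` is empty until the status reaches `completed`, so clients should key off `status` rather than checking whether `outputs` is non-empty.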