Flux 2 Dev Edit LoRA

FLUX.2 [dev] Edit with LoRA support enables precise image-to-image editing with natural-language instructions, hex color control, and personalized styles via custom LoRA adapters. It extends FLUX.2 [dev] Edit with up to 4 LoRAs for consistent, brand-specific results, and is served through a ready-to-use REST inference API with high performance, no cold starts, and affordable pricing.

Features

FLUX.2 [dev] — Edit with LoRA

FLUX.2 [dev] Edit LoRA is a lightweight, LoRA-aware editing model built on FLUX.2 [dev]. It’s designed to take an existing image and apply personalised, structure-preserving edits using natural language prompts plus up to 4 custom LoRA adapters—ideal when you want fast, consistent updates to assets you already have.


Ideal use cases

  • Refreshing existing brand or product imagery with custom LoRAs
  • Keeping characters consistent across many edited shots
  • Rolling out seasonal or campaign variants from a stable asset library
  • Post-processing generations from FLUX.2 [dev] Text-to-Image
  • Teams that need low-cost, high-volume edits with style control

LoRA-guided editing on a compact engine

Starting from the same lean architecture as FLUX.2 [dev] Edit, Edit LoRA adds adapter hooks so your custom LoRAs can drive the look and feel of each edit. The base image anchors composition and identity, while LoRAs and prompts work together to adjust style, colours, and details—keeping edits fast and predictable even at scale.


Why teams pick this model

• Dev-class editing, upgraded with LoRAs

You keep the familiar behaviour of the original Edit model (local, prompt-based changes) and layer LoRAs on top for brand styles, art directions, or recurring characters—so everything stays on-model across campaigns.

• Multiple LoRAs in one pass

Attach up to four adapters at once, each with its own strength in the 0–4 range. For example, combine a “character” LoRA, a “lighting/style” LoRA, and a “brand palette” LoRA while you update backgrounds, outfits, or props via text (see the request sketch after this list).

• Structure stays, style evolves

The model treats the input image as the anchor: faces, poses, and layout remain intact while textures, colours, and surface details are updated. That makes it ideal for catalogue refreshes and long-running series.

• Batch-friendly performance

Generate 1–4 edited variants per request using the same LoRA stack and prompt. This makes it easy to spin up A/B sets, platform variants, or bulk updates without manually tracking parameters per image.

• Open, integration-ready foundation

Built on the open FLUX.2 dev stack, so it plugs cleanly into your own LoRA training, storage, and deployment infrastructure, whether you manage LoRAs per client, per brand, or per project.

• Cost-efficient asset reuse

Because LoRAs are lightweight and edits are local, you can update large libraries of images at low cost—instead of regenerating everything from scratch or retraining a full model.
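
As a concrete sketch of the stacked-LoRA request mentioned above, three adapters can be combined in a single call while the prompt describes the edit. The image and LoRA URLs below are placeholders, not real assets:

# Sketch: stacking three LoRAs in one edit request (all URLs are illustrative placeholders)
curl --location --request POST "https://api.wavespeed.ai/api/v3/wavespeed-ai/flux-2-dev/edit-lora" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{
    "prompt": "Swap the outfit for the winter campaign jacket; keep the pose and framing unchanged",
    "images": ["https://example.com/hero-shot.png"],
    "loras": [
        {"path": "https://example.com/loras/brand-character.safetensors", "scale": 1.0},
        {"path": "https://example.com/loras/studio-lighting.safetensors", "scale": 0.7},
        {"path": "https://example.com/loras/brand-palette.safetensors", "scale": 0.5}
    ]
}'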


Pricing

Simple per-image billing:

  • $0.03 per edited image

FLUX.2 [dev] family on WaveSpeedAI


More image tools with LoRA support

  • qwen-image/edit-plus-lora – combines Qwen’s strong semantic understanding with LoRA-based style control for precise, localised edits that keep the overall composition intact.
  • FLUX Kontext LoRA – a FLUX.2 dev LoRA stack tuned for cleaner prompts, better context handling, and more coherent, production-friendly generations.
  • SDXL-LoRA – a collection of SDXL LoRAs covering many styles and subjects, ideal for fast visual customisation without full-model fine-tuning.

LoRA resources

  • Use your LoRA – how to plug existing LoRAs into WaveSpeedAI.
  • Train your LoRA – step-by-step guide to training LoRAs without writing code.

Authentication

For authentication details, please refer to the Authentication Guide.
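
The curl examples below read the key from the WAVESPEED_API_KEY environment variable; assuming you have already created a key, export it once per shell session:

# Make your API key available to the curl examples below ("your-api-key" is a placeholder)
export WAVESPEED_API_KEY="your-api-key"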

API Endpoints

Submit Task & Query Result


# Submit the task (prompt and images are required; replace the placeholder image URL with your own)
curl --location --request POST "https://api.wavespeed.ai/api/v3/wavespeed-ai/flux-2-dev/edit-lora" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{
    "prompt": "Change the background to a clean white studio backdrop",
    "images": ["https://example.com/input.png"],
    "loras": [],
    "seed": -1,
    "enable_base64_output": false,
    "enable_sync_mode": false
}'

# Get the result
curl --location --request GET "https://api.wavespeed.ai/api/v3/predictions/${requestId}/result" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}"

Parameters

Task Submission Parameters

Request Parameters

Parameter | Type | Required | Default | Range | Description
prompt | string | Yes | - | - | The prompt describing the desired edits to the image.
images | array | Yes | [] | 1 ~ 3 items | List of URLs of input images for editing. The maximum number of images is 3.
loras | array | No | - | max 4 items | List of LoRAs to apply (max 4).
loras[].path | string | Yes | - | - | Path to the LoRA model.
loras[].scale | float | Yes | - | 0.0 ~ 4.0 | Scale of the LoRA model.
size | string | No | - | 256 ~ 1536 per dimension | The size of the generated media in pixels (width*height).
seed | integer | No | -1 | -1 ~ 2147483647 | The random seed to use for the generation. -1 means a random seed will be used.
enable_base64_output | boolean | No | false | - | If enabled, the output will be encoded into a BASE64 string instead of a URL. This property is only available through the API.
enable_sync_mode | boolean | No | false | - | If set to true, the function will wait for the result to be generated and uploaded before returning the response. It allows you to get the result directly in the response. This property is only available through the API.
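
For reference, a request body that also sets the optional parameters might look like the following (the image URL and LoRA path are placeholders):

{
    "prompt": "Recolor the jacket to hex #1E3A8A and keep everything else unchanged",
    "images": ["https://example.com/product.png"],
    "loras": [
        {"path": "https://example.com/loras/brand-style.safetensors", "scale": 0.8}
    ],
    "size": "1024*1024",
    "seed": 42,
    "enable_base64_output": false,
    "enable_sync_mode": false
}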

Response Parameters

Parameter | Type | Description
code | integer | HTTP status code (e.g., 200 for success)
message | string | Status message (e.g., “success”)
data.id | string | Unique identifier for the prediction (task ID)
data.model | string | Model ID used for the prediction
data.outputs | array | Array of URLs to the generated content (empty when status is not completed)
data.urls | object | Object containing related API endpoints
data.urls.get | string | URL to retrieve the prediction result
data.has_nsfw_contents | array | Array of boolean values indicating NSFW detection for each output
data.status | string | Status of the task: created, processing, completed, or failed
data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”)
data.error | string | Error message (empty if no error occurred)
data.timings | object | Object containing timing details
data.timings.inference | integer | Inference time in milliseconds
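
An illustrative result payload for a completed task, with placeholder values, has the following shape:

{
    "code": 200,
    "message": "success",
    "data": {
        "id": "<task-id>",
        "model": "wavespeed-ai/flux-2-dev/edit-lora",
        "outputs": ["https://example.com/outputs/edited.png"],
        "urls": {
            "get": "https://api.wavespeed.ai/api/v3/predictions/<task-id>/result"
        },
        "has_nsfw_contents": [false],
        "status": "completed",
        "created_at": "2023-04-01T12:34:56.789Z",
        "error": "",
        "timings": {"inference": 1234}
    }
}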

Result Request Parameters
