Flux Kontext Dev LoRA Ultra Fast
Ultra-fast FLUX.1 Kontext [dev] endpoint with LoRA support for rapid image editing and brand/style adaptation using pre-trained LoRAs. Ready-to-use REST inference API with top performance, no cold starts, and affordable pricing.
Features
FLUX Kontext Dev LoRA Ultra Fast — wavespeed-ai/flux-kontext-dev-lora-ultra-fast
FLUX Kontext Dev LoRA Ultra Fast is a low-latency image-to-image editing model that supports LoRA adapters directly in the request. Provide a source image plus a natural-language edit instruction, and optionally attach up to 3 LoRAs to steer style, identity consistency, or domain aesthetics—optimized for rapid iteration and production workflows.
Key capabilities
- Ultra-fast instruction-based image editing from a single input image
- LoRA-enabled inference: apply up to 3 LoRAs via input parameters
- Strong preservation when you explicitly state what must remain unchanged
- Great for iterative editing: quick refinements across multiple passes with minimal drift
Typical use cases
- Fast retouching + consistent styling using a “house look” LoRA
- Batch product edits (color variants, background swaps) with brand LoRAs
- Text edits on packaging/signage while keeping typography/perspective consistent
- Rapid A/B testing by switching LoRAs instead of rewriting prompts
Pricing
$0.025 per image.
Cost per run = num_images × $0.025. Example: num_images = 4 → 4 × $0.025 = $0.10.
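As a quick sanity check, the per-run cost above can be computed directly (a minimal sketch; the price constant comes from the pricing note):

```python
# Cost estimate for this endpoint: $0.025 per generated image.
PRICE_PER_IMAGE = 0.025

def cost_per_run(num_images: int) -> float:
    """Return the USD cost of one run that generates `num_images` images."""
    return num_images * PRICE_PER_IMAGE

print(cost_per_run(4))  # 0.1
```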
Inputs and outputs
Input:
- One source image (upload or public URL)
- One edit instruction (prompt)
- Optional: up to 3 LoRA items
Output:
- One or more edited images (controlled by num_images)
Parameters
Core:
- prompt: Edit instruction describing what to change and what to preserve
- image: Source image
- size: Output resolution in pixels (width*height, 256–1536 per dimension)
- num_inference_steps: More steps can improve fidelity but increases latency
- guidance_scale: Higher values follow the prompt more strongly; too high may over-edit
- num_images: Number of variations generated per run
- seed: Fixed value for reproducibility; -1 for random
- output_format: jpeg, png, or webp
- enable_base64_output: Return BASE64 instead of a URL (API only)
- enable_sync_mode: Wait for generation and return results directly (API only)
LoRA (up to 3 items):
- loras: A list of LoRA entries (max 3); each entry has:
  - path: owner/model-name or a direct .safetensors URL
  - scale: LoRA strength (start around 0.6–1.0 and adjust)
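The `loras` field is a list of objects with `path` and `scale` keys. A minimal request fragment might look like the sketch below; the LoRA paths are placeholders, not real models:

```python
# Sketch of the `loras` portion of a request body. Both paths below are
# hypothetical placeholders illustrating the two accepted forms.
payload_fragment = {
    "loras": [
        {"path": "owner/house-style-lora", "scale": 0.8},                 # owner/model-name form
        {"path": "https://example.com/brand.safetensors", "scale": 0.6},  # direct .safetensors URL
    ]
}

# The endpoint accepts at most 3 entries.
assert len(payload_fragment["loras"]) <= 3
```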
Prompting guide
Use a clear “preserve + edit + constraints” structure and let LoRAs control the look:
Template: Keep [what must stay]. Change [what to edit]. Ensure [constraints]. Apply LoRA style consistently without altering identity.
Example prompts
- Keep the person’s face, hairstyle, and pose unchanged. Replace the background with a clean studio backdrop. Match lighting direction and shadow softness.
- Keep the product shape and label layout unchanged. Replace only the label text with “WaveSpeedAI”, preserving font style, size, and perspective.
- Remove the background clutter and keep the main subject sharp. Preserve natural skin texture while reducing shine.
Best practices
- Start with a single LoRA; add a second or third only when needed.
- If the output is over-stylized, reduce LoRA scale and/or guidance_scale.
- For consistent batches, reuse the same LoRA set + scales and fix seed for comparisons.
- Match width/height to the original aspect ratio to avoid distortions.
Authentication
For authentication details, please refer to the Authentication Guide.
API Endpoints
Submit Task & Query Result
# Submit the task
curl --location --request POST "https://api.wavespeed.ai/api/v3/wavespeed-ai/flux-kontext-dev-lora-ultra-fast" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{
"prompt": "Keep the subject unchanged. Replace the background with a clean studio backdrop.",
"image": "https://example.com/source.jpg",
"num_inference_steps": 28,
"guidance_scale": 2.5,
"num_images": 1,
"seed": -1,
"loras": [],
"output_format": "jpeg",
"enable_base64_output": false,
"enable_sync_mode": false
}'
# Get the result
curl --location --request GET "https://api.wavespeed.ai/api/v3/predictions/${requestId}/result" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}"
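The same submit-and-poll flow can be sketched in Python using only the standard library. This mirrors the two curl calls above; the prompt and image URL are illustrative placeholders:

```python
# Minimal submit-and-poll sketch mirroring the curl calls above (stdlib only).
import json
import os
import time
import urllib.request

API_BASE = "https://api.wavespeed.ai/api/v3"
MODEL = "wavespeed-ai/flux-kontext-dev-lora-ultra-fast"

def build_request(api_key: str, payload: dict) -> urllib.request.Request:
    """Build the POST request that submits a task."""
    return urllib.request.Request(
        f"{API_BASE}/{MODEL}",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

def poll_result(api_key: str, request_id: str, interval: float = 1.0) -> dict:
    """Poll the result endpoint until the task completes or fails."""
    url = f"{API_BASE}/predictions/{request_id}/result"
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {api_key}"})
    while True:
        with urllib.request.urlopen(req) as resp:
            body = json.load(resp)
        if body["data"]["status"] in ("completed", "failed"):
            return body
        time.sleep(interval)

if __name__ == "__main__":
    key = os.environ["WAVESPEED_API_KEY"]
    payload = {
        "prompt": "Keep the subject unchanged. Replace the background with a studio backdrop.",
        "image": "https://example.com/source.jpg",  # placeholder source image URL
        "num_images": 1,
        "seed": -1,
        "loras": [],
    }
    with urllib.request.urlopen(build_request(key, payload)) as resp:
        request_id = json.load(resp)["data"]["id"]
    print(poll_result(key, request_id)["data"]["outputs"])
```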
Parameters
Task Submission Parameters
Request Parameters
| Parameter | Type | Required | Default | Range | Description |
|---|---|---|---|---|---|
| prompt | string | Yes | - | - | The positive prompt for the generation. |
| image | string | No | - | - | The source image to edit (upload or public URL). |
| size | string | No | - | 256 ~ 1536 per dimension | The size of the generated media in pixels (width*height). |
| num_inference_steps | integer | No | 28 | 1 ~ 50 | The number of inference steps to perform. |
| guidance_scale | number | No | 2.5 | 0.0 ~ 20.0 | The guidance scale to use for the generation. |
| num_images | integer | No | 1 | 1 ~ 4 | The number of images to generate. |
| seed | integer | No | -1 | -1 ~ 2147483647 | The random seed to use for the generation. -1 means a random seed will be used. |
| loras | array | No | - | max 3 items | List of LoRAs to apply (max 3). |
| loras[].path | string | Yes | - | - | Path to the LoRA model |
| loras[].scale | float | Yes | - | 0.0 ~ 4.0 | Scale of the LoRA model |
| output_format | string | No | jpeg | jpeg, png, webp | The format of the output image. |
| enable_base64_output | boolean | No | false | - | If enabled, the output will be encoded into a BASE64 string instead of a URL. This property is only available through the API. |
| enable_sync_mode | boolean | No | false | - | If set to true, the function will wait for the result to be generated and uploaded before returning the response. It allows you to get the result directly in the response. This property is only available through the API. |
Response Parameters
| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., “success”) |
| data.id | string | Unique identifier for the prediction (task ID) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.has_nsfw_contents | array | Array of boolean values indicating NSFW detection for each output |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”) |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |
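Following the schema in the tables above, output URLs can be pulled out of a result response with a small helper. The sample response below is illustrative, not a real API capture:

```python
# Extract output URLs from a sample result response that follows the schema
# described in the response tables above. All field values are illustrative.
sample_response = {
    "code": 200,
    "message": "success",
    "data": {
        "id": "req-123",  # illustrative task ID
        "model": "wavespeed-ai/flux-kontext-dev-lora-ultra-fast",
        "outputs": ["https://example.com/out-0.jpeg"],
        "status": "completed",
        "error": "",
        "timings": {"inference": 850},
    },
}

def extract_outputs(response: dict) -> list:
    """Return output URLs, or raise if the task failed or is still running."""
    data = response["data"]
    if data["status"] == "failed":
        raise RuntimeError(f"task {data['id']} failed: {data['error']}")
    if data["status"] != "completed":
        raise RuntimeError(f"task {data['id']} not finished (status={data['status']})")
    return data["outputs"]

print(extract_outputs(sample_response))  # ['https://example.com/out-0.jpeg']
```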
Result Request Parameters
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| id | string | Yes | - | Task ID |
Result Response Parameters
| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., “success”) |
| data | object | The prediction data object containing all details |
| data.id | string | Unique identifier for the prediction (task ID) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when status is not completed). |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”) |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |