Flux Dev LoRA Ultra Fast
Playground
Try it on WavespeedAI! Ultra-fast FLUX.1 [dev] endpoint with LoRA support for high-quality image generation, personalization, and brand/style adaptation. Ready-to-use REST inference API with high performance, no cold starts, and affordable pricing.
Features
FLUX Dev LoRA Ultra Fast — wavespeed-ai/flux-dev-lora-ultra-fast
FLUX Dev LoRA Ultra Fast is a high-speed image generation and editing model that supports applying up to three LoRAs in a single run. It’s built for rapid iteration: generate new images from a prompt, or guide results with an input image (image-to-image) and an optional mask for localized edits. With LoRA stacking, you can quickly dial in style, character identity, or product aesthetics while keeping latency and cost low.
Key capabilities
- Text-to-image generation with fast turnaround
- Image-to-image generation for guided edits and variations
- Masked editing (inpainting-style) for localized changes
- LoRA support: add up to 3 LoRAs per run
- Adjustable strength controlling how much the input image is transformed versus preserved
- Custom output size, seed control, and standard image formats
Use cases
- Rapid style exploration using one or more LoRAs (photoreal, illustration, anime, etc.)
- Character consistency by stacking identity + style LoRAs
- Product mockups: swap materials/colors and keep composition stable
- Portrait retouching or wardrobe/background changes with a mask
- High-volume generation for ads, thumbnails, and creative testing
Pricing
| Output | Price |
|---|---|
| Per image | $0.006 |
Inputs
- prompt (required): what to generate or how to edit
- image (optional): source image for image-to-image
- mask_image (optional): edit region mask (for localized edits)
- loras (optional): list of LoRA items (up to 3)
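The inputs above can be assembled into a JSON-serializable request body. A minimal sketch in Python (field names are taken from this page; the prompt text and the `build_payload` helper itself are illustrative):

```python
# Build a request payload for flux-dev-lora-ultra-fast.
# Field names follow the Inputs/Parameters sections of this page.
def build_payload(prompt, loras=None, image=None, mask_image=None, strength=0.8):
    if loras and len(loras) > 3:
        raise ValueError("at most 3 LoRAs per run")
    payload = {"prompt": prompt, "strength": strength}
    if image:
        payload["image"] = image          # source image for image-to-image
    if mask_image:
        payload["mask_image"] = mask_image  # white = edit, black = preserve
    if loras:
        payload["loras"] = [{"path": p, "scale": s} for p, s in loras]
    return payload

payload = build_payload(
    "portrait photo, natural skin texture",
    loras=[("strangerzonehf/Flux-Super-Realism-LoRA", 1.0)],
)
```

Optional fields are omitted entirely rather than sent as nulls, so the endpoint's defaults apply.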
Parameters
- strength: how strongly to transform the input image (higher = more change; lower = more preservation)
- loras[].path: LoRA identifier (owner/model) or a public .safetensors URL
- loras[].scale: LoRA weight (typical range ~0.6–1.2 depending on LoRA)
- width / height: output size
- num_inference_steps: sampling steps
- guidance_scale: prompt adherence strength
- num_images: number of images to generate per run
- seed: random seed (-1 for random; set for reproducible results)
- output_format: jpeg / png / webp, etc.
- enable_base64_output: return BASE64 instead of URL (API only)
Prompting tips
- If you’re stacking LoRAs, keep the base prompt clean and let LoRAs do the heavy lifting.
- For edits, describe the change explicitly and keep “what stays the same” implicit via the input image.
- For text rendering in images, keep on-image text short and specify placement/material.
Example prompts
- Super realism, ultra high-resolution photograph, cinematic lighting, shallow depth of field, shot on a Sony A7III, natural skin texture, sharp eyes, soft bokeh background.
- Edit: change the outfit to a light gray sweater and gold thin-rimmed glasses, keep the same face and hairstyle, realistic studio lighting.
Authentication
For authentication details, please refer to the Authentication Guide.
API Endpoints
Submit Task & Query Result
# Submit the task
curl --location --request POST "https://api.wavespeed.ai/api/v3/wavespeed-ai/flux-dev-lora-ultra-fast" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{
    "prompt": "Super realism, ultra high-resolution photograph, cinematic lighting, natural skin texture",
    "strength": 0.8,
    "loras": [
        {
            "path": "strangerzonehf/Flux-Super-Realism-LoRA",
            "scale": 1
        }
    ],
    "size": "1024*1024",
    "num_inference_steps": 28,
    "guidance_scale": 3.5,
    "num_images": 1,
    "seed": -1,
    "output_format": "jpeg",
    "enable_base64_output": false
}'
# Get the result
curl --location --request GET "https://api.wavespeed.ai/api/v3/predictions/${requestId}/result" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}"
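The two calls above can be wrapped in a small polling loop. A sketch in Python, where `get_result` is any callable that returns the parsed JSON response body (in real use it would wrap an HTTP GET against the `/predictions/{id}/result` endpoint with the Bearer auth header; the helper name and polling intervals are illustrative):

```python
import time

def wait_for_result(request_id, get_result, interval=0.5, timeout=60):
    """Poll the result endpoint until the task completes or fails."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        body = get_result(request_id)
        status = body["data"]["status"]
        if status == "completed":
            return body["data"]["outputs"]  # list of output URLs
        if status == "failed":
            raise RuntimeError(body["data"]["error"])
        time.sleep(interval)  # still "created" or "processing"
    raise TimeoutError(f"task {request_id} did not finish in {timeout}s")
```

Injecting `get_result` keeps the loop independent of any particular HTTP client; with `requests`, it could be `lambda rid: requests.get(f"https://api.wavespeed.ai/api/v3/predictions/{rid}/result", headers=auth).json()`.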
Parameters
Task Submission Parameters
Request Parameters
| Parameter | Type | Required | Default | Range | Description |
|---|---|---|---|---|---|
| prompt | string | Yes | - | - | The positive prompt for the generation. |
| image | string | No | - | - | Source image for image-to-image generation. |
| mask_image | string | No | - | - | The mask image tells the model where to generate new pixels (white) and where to preserve the original image (black). It acts as a stencil for targeted image editing. |
| strength | number | No | 0.8 | 0.01 ~ 1.00 | How strongly to transform the reference image (higher = more change). |
| loras | array | No | - | max 3 items | List of LoRAs to apply (max 3). |
| loras[].path | string | Yes | - | - | Path to the LoRA model (owner/model identifier or a public .safetensors URL). |
| loras[].scale | float | Yes | - | 0.0 ~ 4.0 | Scale of the LoRA model |
| size | string | No | 1024*1024 | 256 ~ 1536 per dimension | The size of the generated media in pixels (width*height). |
| num_inference_steps | integer | No | 28 | 1 ~ 50 | The number of inference steps to perform. |
| guidance_scale | number | No | 3.5 | 0.0 ~ 20.0 | The guidance scale to use for the generation. |
| num_images | integer | No | 1 | 1 ~ 4 | The number of images to generate. |
| seed | integer | No | -1 | -1 ~ 2147483647 | The random seed to use for the generation. -1 means a random seed will be used. |
| output_format | string | No | jpeg | jpeg, png, webp | The format of the output image. |
| enable_base64_output | boolean | No | false | - | If enabled, the output will be encoded into a BASE64 string instead of a URL. This property is only available through the API. |
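The ranges in the table can be checked client-side before submission, failing fast instead of waiting for an API error. A minimal sketch (the `validate_params` helper is illustrative; the bounds are copied from the table above):

```python
def validate_params(p):
    """Raise ValueError if a request dict violates the documented ranges."""
    if not p.get("prompt"):
        raise ValueError("prompt is required")
    numeric_bounds = [
        ("strength", 0.01, 1.0),
        ("num_inference_steps", 1, 50),
        ("guidance_scale", 0.0, 20.0),
        ("num_images", 1, 4),
        ("seed", -1, 2147483647),
    ]
    for key, lo, hi in numeric_bounds:
        if key in p and not (lo <= p[key] <= hi):
            raise ValueError(f"{key} must be in [{lo}, {hi}]")
    loras = p.get("loras", [])
    if len(loras) > 3:
        raise ValueError("at most 3 LoRAs per run")
    for lora in loras:
        if not (0.0 <= lora["scale"] <= 4.0):
            raise ValueError("loras[].scale must be in [0.0, 4.0]")
    if p.get("output_format", "jpeg") not in ("jpeg", "png", "webp"):
        raise ValueError("unsupported output_format")
```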
Response Parameters
| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., “success”) |
| data.id | string | Unique identifier for the prediction (task ID) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.has_nsfw_contents | array | Array of boolean values indicating NSFW detection for each output |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”) |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |
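A submission response with the fields above can be unpacked like this in Python (the `extract_task` helper and the sample body are illustrative, with field names matching the table):

```python
def extract_task(resp):
    """Pull the task ID, result URL, and NSFW flags from a submission response."""
    if resp["code"] != 200:
        raise RuntimeError(resp["message"])
    data = resp["data"]
    return data["id"], data["urls"]["get"], data.get("has_nsfw_contents", [])

# Illustrative sample body mirroring the documented response fields.
sample = {
    "code": 200,
    "message": "success",
    "data": {
        "id": "abc123",
        "model": "wavespeed-ai/flux-dev-lora-ultra-fast",
        "outputs": [],  # empty until status is "completed"
        "urls": {"get": "https://api.wavespeed.ai/api/v3/predictions/abc123/result"},
        "has_nsfw_contents": [],
        "status": "created",
        "created_at": "2023-04-01T12:34:56.789Z",
        "error": "",
        "timings": {},
    },
}
task_id, result_url, nsfw_flags = extract_task(sample)
```

The returned `result_url` is exactly the endpoint queried in the "Get the result" curl call.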
Result Request Parameters
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| id | string | Yes | - | Task ID |
Result Response Parameters
| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., “success”) |
| data | object | The prediction data object containing all details |
| data.id | string | Unique identifier for the prediction (task ID) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when status is not completed). |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”) |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |