Flux 2 Dev Text To Image LoRA
Try it on WaveSpeedAI! FLUX.2 [dev] with LoRA support delivers fast, studio-quality text-to-image generation with enhanced realism, crisper text rendering, and personalised styles via custom LoRA adapters. It extends FLUX.2 [dev] with support for up to 4 LoRA adapters for brand-specific outputs, and comes with a ready-to-use REST inference API, strong performance, no cold starts, and affordable pricing.
Features
FLUX.2 [dev] — Text-to-Image with LoRA
FLUX.2 [dev] with LoRA support is the personalised version of the lean FLUX.2 [dev] Text-to-Image model. It keeps the fast, stable base behaviour, while letting you plug in up to 4 custom LoRA adapters to inject your own styles, characters, or brand identity into each generation.
Good for
- Rapid prototyping with personalised visual styles
- Brand-specific image generation at scale
- Character-consistent content (mascots, VTubers, OCs, etc.)
- Custom training workflows based on LoRA fine-tuning
- Teams that need both speed and fine-grained style control
LoRA-powered personalisation on a lean base
FLUX.2 [dev] + LoRA starts from the same lightweight, production-friendly dev model and adds adapter hooks for your own LoRAs. You can mix several adapters in one request, control their strengths independently, and still get the fast turnaround and predictable behaviour that make dev a good “default” engine.
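For example, a request body that stacks two adapters might look like the sketch below; the LoRA paths are placeholders for your own adapter files or URLs, and the scales simply illustrate independent strength control:
{
  "prompt": "Mascot character waving in front of a neon storefront, studio lighting",
  "size": "1024*1024",
  "loras": [
    { "path": "<your-character-lora>", "scale": 1.0 },
    { "path": "<your-brand-style-lora>", "scale": 0.8 }
  ]
}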
Why Choose This
• Familiar dev behaviour, with extra style control
You get the same quick, reliable generations as FLUX.2 [dev] Text-to-Image, plus the ability to load custom LoRAs for specific art styles, brand looks, or recurring characters.
• Up to 4 LoRAs in a single run
Attach as many as four adapters at once and give each one its own strength (0–4). Combine, for example, a character LoRA, a lighting/style LoRA, and a brand-colour LoRA in one prompt to keep everything consistent across outputs.
• Style-consistent batches
Generate 1–4 images per request with the same LoRA stack, making it easy to produce A/B variants, campaign sets, or social content packs that share a coherent visual identity.
• Open, transparent foundation
Built on the same open FLUX.2 stack as the base dev model, so integrating with your own LoRA training, management, and deployment tooling is straightforward.
• Cost-effective customisation
LoRA adapters add only a small overhead compared with full fine-tuning, which keeps per-image costs low even when you apply several custom styles.
Pricing
Simple per-image billing:
- $0.018 per generated image
FLUX.2 [dev] family on WaveSpeedAI
- FLUX.2 [dev] Text-to-Image – base model without LoRA for the fastest, most lightweight generation.
- FLUX.2 [dev] Edit – refine and update existing images with prompt-based edits.
- FLUX.2 [dev] Edit LoRA – image editing that also supports LoRA-based personalisation.
More Image Tools with LoRA Support
- qwen-image/edit-plus-lora – combines Qwen’s strong semantic understanding with LoRA-based style control for precise, localised edits that still preserve overall composition.
- FLUX Kontext LoRA – a FLUX.2 dev LoRA stack optimised for cleaner prompts, better context handling, and more coherent, production-friendly generations.
- SDXL-LoRA – a collection of SDXL LoRAs offering a wide range of ready-made styles and subjects, ideal for fast customisation without full fine-tuning.
LoRA resources
- Use your LoRA – how to plug existing LoRAs into WaveSpeedAI.
- Train your LoRA – step-by-step guide to training LoRAs without writing code.
Authentication
For authentication details, please refer to the Authentication Guide.
API Endpoints
Submit Task & Query Result
# Submit the task
curl --location --request POST "https://api.wavespeed.ai/api/v3/wavespeed-ai/flux-2-dev/text-to-image-lora" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{
  "prompt": "A cinematic product photo of a ceramic coffee mug on a sunlit wooden desk",
  "size": "1024*1024",
  "loras": [],
  "seed": -1,
  "enable_base64_output": false,
  "enable_sync_mode": false
}'
# Get the result
curl --location --request GET "https://api.wavespeed.ai/api/v3/predictions/${requestId}/result" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}"
Parameters
Task Submission Parameters
Request Parameters
| Parameter | Type | Required | Default | Range | Description |
|---|---|---|---|---|---|
| prompt | string | Yes | - | - | The positive prompt for the generation. |
| size | string | No | 1024*1024 | 256 ~ 1536 per dimension | The size of the generated media in pixels (width*height). |
| loras | array | No | - | max 4 items | List of LoRAs to apply (max 4). |
| loras[].path | string | Yes | - | - | Path to the LoRA model. |
| loras[].scale | float | Yes | - | 0.0 ~ 4.0 | Scale of the LoRA model |
| seed | integer | No | -1 | -1 ~ 2147483647 | The random seed to use for the generation. -1 means a random seed will be used. |
| enable_base64_output | boolean | No | false | - | If enabled, the output will be encoded into a BASE64 string instead of a URL. This property is only available through the API. |
| enable_sync_mode | boolean | No | false | - | If set to true, the function will wait for the result to be generated and uploaded before returning the response. It allows you to get the result directly in the response. This property is only available through the API. |
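If you prefer a single blocking call, enable_sync_mode and enable_base64_output can be combined. The sketch below assumes the Base64 string is returned in place of the URL in data.outputs, so adjust the parsing if your response differs:
# Sketch: synchronous call with Base64 output; assumes the encoded image replaces the URL in data.outputs[0]
curl --silent --location --request POST "https://api.wavespeed.ai/api/v3/wavespeed-ai/flux-2-dev/text-to-image-lora" \
  --header "Content-Type: application/json" \
  --header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
  --data-raw '{"prompt": "Flat-lay photo of a branded notebook", "enable_sync_mode": true, "enable_base64_output": true}' \
  | jq -r '.data.outputs[0]' | base64 -d > output.png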
Response Parameters
| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., “success”) |
| data.id | string | Unique identifier for the prediction (task ID) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.has_nsfw_contents | array | Array of boolean values indicating NSFW detection for each output |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”) |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |
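For reference, a completed prediction response built from the fields above would look roughly like this (all values are illustrative placeholders):
{
  "code": 200,
  "message": "success",
  "data": {
    "id": "<request-id>",
    "model": "wavespeed-ai/flux-2-dev/text-to-image-lora",
    "outputs": ["https://example.com/<generated-image>.jpg"],
    "urls": {
      "get": "https://api.wavespeed.ai/api/v3/predictions/<request-id>/result"
    },
    "has_nsfw_contents": [false],
    "status": "completed",
    "created_at": "2023-04-01T12:34:56.789Z",
    "error": "",
    "timings": { "inference": 1234 }
  }
}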