Flux 2 Max Edit
Playground
Try it on WaveSpeedAI! FLUX.2 [max] Edit delivers production-grade image-to-image editing from Black Forest Labs: apply natural-language instructions and exact hex colour control for consistent, studio-quality results. Ready-to-use REST inference API, best performance, no cold starts, affordable pricing.
Features
FLUX.2 [max] — Image-to-Image Edit
FLUX.2 [max] Edit is the top-tier image editing engine of the FLUX family, built for production teams who need reliable, high-fidelity edits from natural language instructions. It can take one or several reference images plus a prompt and turn them into clean, on-brand transformations suitable for final delivery—not just drafts.
Suitable for
- Campaign key visuals and hero images
- Brand-accurate product and packaging refreshes
- E-commerce and marketing asset pipelines
- Automated, high-value editing jobs where failure is expensive
Production-focused editing model
Rather than exposing a long list of inference knobs, FLUX.2 [max] Edit is tuned to behave the same way every time: you pass in image(s) and a prompt, and it returns a polished edit. Internal settings are fixed for production use, so non-experts and API-driven workflows both get stable behaviour without babysitting parameters.
Practical advantages
Multi-reference aware edits
Use several input images in the same request when you need to match outfits, backgrounds, or stylistic cues; the model understands how they relate and applies changes accordingly.
Plain-language control
Describe edits in normal English (“make the background a clean studio grey and match the logo colour to our brand red”) instead of painting masks, cutting layers, or writing complex configs. Hex colours for brand palettes are supported when you need exact matches.
Structure-preserving transformations
Refines lighting, texture, and local detail while keeping composition, perspective, and identity intact, so the result looks like a carefully shot original rather than an obviously patched image.
Minimal setup, maximum throughput
No guidance scales, schedulers, or step counts to tune—just prompt-to-edit. That makes it easy to plug into batch jobs, web backends, or no-code tools without a separate “parameter tuning” phase.
Consistent behaviour at scale
A fixed optimisation profile plus seed control mean that large edit batches behave predictably, which is crucial for A/B testing, QA, and evergreen production flows.
Outputs that drop into your pipeline
Exports standard PNG or JPEG so edited assets can go straight into design tools, websites, print workflows, or further post-production with no extra conversion steps.
Pricing
- $0.03 per edited image
FLUX.2 family on WaveSpeedAI
Use FLUX.2 [max] Edit alongside the rest of the FLUX.2 lineup for a complete generate-and-edit stack:
- FLUX.2 [dev] Text-to-Image – lightweight base model optimised for speed and LoRA training.
- FLUX.2 [dev] Edit – fast, style-consistent edits on existing images with a lean architecture.
- FLUX.2 [flex] Text-to-Image – versatile, style-rich generation with broader aesthetics at high speed.
- FLUX.2 [flex] Edit – precise, controllable, and colour-accurate edits on existing images.
- FLUX.2 [max] Edit – this model, for high-fidelity, production-grade image editing across your most important assets.
More Image Tools on WaveSpeedAI
- Nano Banana Pro – Google’s Gemini-based text-to-image model for sharp, coherent, prompt-faithful visuals that work great for ads, keyframes, and product shots.
- Seedream V4 – ByteDance’s style-consistent, multi-image generator ideal for posters, campaigns, and large batches of on-brand illustrations.
- Qwen Edit Plus – an enhanced Qwen-based image editor for precise inpainting, cleanup, and local style changes while preserving overall composition.
Authentication
For authentication details, please refer to the Authentication Guide.
API Endpoints
Submit Task & Query Result
# Submit the task
curl --location --request POST "https://api.wavespeed.ai/api/v3/wavespeed-ai/flux-2-max/edit" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{
"prompt": "Make the background a clean studio grey",
"images": ["https://example.com/input.jpg"],
"seed": -1,
"enable_sync_mode": false,
"enable_base64_output": false
}'
# Get the result
curl --location --request GET "https://api.wavespeed.ai/api/v3/predictions/${requestId}/result" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}"
Parameters
Task Submission Parameters
Request Parameters
| Parameter | Type | Required | Default | Range | Description |
|---|---|---|---|---|---|
| prompt | string | Yes | - | - | The positive prompt for the generation. |
| images | array | Yes | [] | 1 ~ 3 items | List of URLs of input images for editing. The maximum number of images is 3. |
| size | string | No | - | 256 ~ 1536 per dimension | The size of the generated media in pixels (width*height). |
| seed | integer | No | -1 | -1 ~ 2147483647 | The random seed to use for the generation. -1 means a random seed will be used. |
| enable_sync_mode | boolean | No | false | - | If set to true, the function will wait for the result to be generated and uploaded before returning the response. It allows you to get the result directly in the response. This property is only available through the API. |
| enable_base64_output | boolean | No | false | - | If enabled, the output will be encoded into a BASE64 string instead of a URL. This property is only available through the API. |
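For reference, a request body using every submission parameter might look like the sketch below; the prompt, hex colour, image URL, seed, and size value are illustrative placeholders, with size written as width*height per the description above.

{
  "prompt": "Make the background a clean studio grey and match the logo colour to #C8102E",
  "images": ["https://example.com/product.jpg"],
  "size": "1024*1024",
  "seed": 42,
  "enable_sync_mode": false,
  "enable_base64_output": false
}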
Response Parameters
| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., “success”) |
| data.id | string | Unique identifier for the prediction, Task Id |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.has_nsfw_contents | array | Array of boolean values indicating NSFW detection for each output |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”) |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |
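An illustrative submission response, with placeholder task ID, model string, timestamp, and timing values, could look like this:

{
  "code": 200,
  "message": "success",
  "data": {
    "id": "<task-id>",
    "model": "wavespeed-ai/flux-2-max/edit",
    "outputs": [],
    "urls": {
      "get": "https://api.wavespeed.ai/api/v3/predictions/<task-id>/result"
    },
    "has_nsfw_contents": [],
    "status": "created",
    "created_at": "2023-04-01T12:34:56.789Z",
    "error": "",
    "timings": { "inference": 0 }
  }
}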
Result Request Parameters
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| id | string | Yes | - | Task ID |
Result Response Parameters
| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., “success”) |
| data | object | The prediction data object containing all details |
| data.id | string | Unique identifier for the prediction, the ID of the prediction to get |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”) |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |
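For comparison with the submission response, an illustrative completed result response populates outputs, status, and timings; all values below are placeholders.

{
  "code": 200,
  "message": "success",
  "data": {
    "id": "<task-id>",
    "model": "wavespeed-ai/flux-2-max/edit",
    "outputs": ["https://example.com/outputs/<task-id>.png"],
    "urls": {
      "get": "https://api.wavespeed.ai/api/v3/predictions/<task-id>/result"
    },
    "status": "completed",
    "created_at": "2023-04-01T12:34:56.789Z",
    "error": "",
    "timings": { "inference": 3500 }
  }
}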