Flux Kontext Dev Multi

Experimental FLUX.1 Kontext [dev] with multi-image handling for contextual multi-input inference and image workflows. Ready-to-use REST inference API, best performance, no cold starts, affordable pricing.

Features

FLUX Kontext Dev Multi — wavespeed-ai/flux-kontext-dev/multi

FLUX.1 Kontext Dev Multi extends instruction-based image editing to a multi-image workflow. You can provide up to 4 reference images alongside a text instruction, enabling richer context, stronger consistency, and more controllable edits across subjects, styles, and scenes—especially useful when one image alone is not enough to describe what you want.

Key capabilities

  • Multi-image contextual editing with up to 4 reference images
  • Better subject/style consistency by grounding edits in multiple references
  • Supports both local edits (specific changes) and global edits (overall look)
  • Ideal for iterative workflows: refine results step-by-step while keeping identity and style stable

Typical use cases

  • Multi-reference character consistency (face/hair/outfit cues from multiple photos)
  • Product edits with reference packs (angle, material, branding consistency)
  • Style guidance from multiple exemplars (illustration style + lighting reference + texture reference)
  • Scene recomposition while preserving subject identity
  • Branding/text edits that must match reference typography and layout

Pricing

$0.03 per generation.

If you generate multiple outputs in one run, the total cost is num_images × $0.03. Example: num_images = 4 → $0.12.
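The per-run arithmetic above can be sketched as a small helper (a sketch only; the function name and the 1–4 range check, taken from the `num_images` parameter table below, are our own):

```python
def generation_cost(num_images: int, price_per_image: float = 0.03) -> float:
    """Total cost for one run: each generated image is billed separately."""
    if not 1 <= num_images <= 4:
        raise ValueError("num_images must be between 1 and 4")
    return round(num_images * price_per_image, 2)

print(generation_cost(1))  # 0.03
print(generation_cost(4))  # 0.12
```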

Inputs and outputs

Input:

  • Up to 4 reference images (upload or public URLs)
  • One edit instruction (prompt)

Output:

  • One or more edited images (controlled by num_images)

Parameters

  • prompt: Edit instruction describing what to change and what to keep
  • images: Up to 4 reference images
  • size: Output resolution in pixels (width*height)
  • num_inference_steps: More steps can improve fidelity but increases latency
  • guidance_scale: Higher values follow the prompt more strongly; too high may over-edit
  • num_images: Number of variations generated per run
  • seed: Fixed value for reproducibility; -1 for random
  • output_format: jpeg, png, or webp
  • enable_base64_output: Return BASE64 instead of a URL (API only)
  • enable_sync_mode: Wait for generation and return results directly (API only)

Prompting guide

For multi-reference runs, be explicit about how each reference should be used:

Template: Use reference 1 for [identity]. Use reference 2 for [outfit/material]. Use reference 3 for [style/lighting]. Use reference 4 for [background/scene]. Keep [must-preserve]. Change [edit request]. Match [lighting/shadows/perspective].
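The template above lends itself to a tiny string builder (purely illustrative; the helper name and signature are our own, and the API only ever sees the final string):

```python
def build_multi_ref_prompt(roles: dict, keep: str, change: str, match: str) -> str:
    """Assemble the reference-role template from the prompting guide.

    roles maps a reference index (1-4) to what that reference supplies.
    """
    parts = [f"Use reference {i} for {role}." for i, role in sorted(roles.items())]
    parts.append(f"Keep {keep}.")
    parts.append(f"Change {change}.")
    parts.append(f"Match {match}.")
    return " ".join(parts)

prompt = build_multi_ref_prompt(
    {1: "face identity", 2: "hairstyle"},
    keep="the pose from the base image",
    change="the background to a modern office",
    match="lighting direction",
)
```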

Example prompts

  • Use reference 1 for face identity and reference 2 for hairstyle. Keep the pose from the base image. Replace the background with a modern office and match lighting direction.
  • Use reference 1 for the product shape and reference 2 for label design. Replace the label text with “WaveSpeedAI”, keeping typography, perspective, and print texture consistent.
  • Use reference 3 as the style guide (soft illustration look) and reference 4 for lighting mood (golden hour). Preserve the subject identity from reference 1.

Best practices

  • Provide clean references: sharp subjects, consistent lighting, minimal occlusion.
  • Assign roles to references (identity vs. style vs. scene) to avoid conflicting signals.
  • Make one change per run, then iterate for tighter control.
  • Fix seed when you need stable comparisons across prompt variants.

Authentication

For authentication details, please refer to the Authentication Guide.

API Endpoints

Submit Task & Query Result


# Submit the task
curl --location --request POST "https://api.wavespeed.ai/api/v3/wavespeed-ai/flux-kontext-dev/multi" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{
    "prompt": "Use reference 1 for face identity and reference 2 for hairstyle. Replace the background with a modern office.",
    "images": [
        "https://example.com/reference-1.jpg",
        "https://example.com/reference-2.jpg"
    ],
    "num_inference_steps": 28,
    "guidance_scale": 2.5,
    "num_images": 1,
    "seed": -1,
    "output_format": "jpeg",
    "enable_base64_output": false,
    "enable_sync_mode": false
}'

# Get the result
curl --location --request GET "https://api.wavespeed.ai/api/v3/predictions/${requestId}/result" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}"
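The submit-then-query flow above can be sketched as a polling loop. This is a sketch, not an official client: `poll_result` and the injectable `fetch` callable are our own (in production `fetch` would wrap the GET request shown above); the terminal statuses "completed" and "failed" come from the response table below.

```python
import time

def poll_result(fetch, interval: float = 1.0, max_attempts: int = 60):
    """Poll the /predictions/{id}/result endpoint until the task finishes.

    fetch is any callable returning the parsed JSON body, so it can be a
    real HTTP call in production and a stub in tests.
    """
    for _ in range(max_attempts):
        body = fetch()
        status = body["data"]["status"]
        if status == "completed":
            return body["data"]["outputs"]
        if status == "failed":
            raise RuntimeError(body["data"]["error"] or "generation failed")
        time.sleep(interval)  # still created/processing; wait and retry
    raise TimeoutError("task did not finish in time")
```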

Parameters

Task Submission Parameters

Request Parameters

| Parameter | Type | Required | Default | Range | Description |
| --- | --- | --- | --- | --- | --- |
| prompt | string | Yes | - | - | The positive prompt for the generation. |
| images | array | No | [] | - | URLs of the reference images to use while generating the image. |
| size | string | No | - | 256 ~ 1536 per dimension | The size of the generated media in pixels (width*height). |
| num_inference_steps | integer | No | 28 | 1 ~ 50 | The number of inference steps to perform. |
| guidance_scale | number | No | 2.5 | 1.0 ~ 20.0 | The guidance scale to use for the generation. |
| num_images | integer | No | 1 | 1 ~ 4 | The number of images to generate. |
| seed | integer | No | -1 | -1 ~ 2147483647 | The random seed to use for the generation. -1 means a random seed will be used. |
| output_format | string | No | jpeg | jpeg, png, webp | The format of the output image. |
| enable_base64_output | boolean | No | false | - | If enabled, the output will be encoded into a BASE64 string instead of a URL. This property is only available through the API. |
| enable_sync_mode | boolean | No | false | - | If set to true, the request waits for the result to be generated and uploaded before returning, so you get the result directly in the response. This property is only available through the API. |

Response Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., "success") |
| data.id | string | Unique identifier for the prediction (task ID) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.has_nsfw_contents | array | Array of boolean values indicating NSFW detection for each output |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., "2023-04-01T12:34:56.789Z") |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |
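Two of the response fields above, `data.outputs` and `data.has_nsfw_contents`, are parallel arrays, so a client can zip them to keep only flagged-safe results. A minimal sketch (the helper name is our own):

```python
def safe_outputs(response: dict) -> list:
    """Pair each output URL with its NSFW flag and keep only the safe ones."""
    data = response["data"]
    flags = data.get("has_nsfw_contents") or [False] * len(data["outputs"])
    return [url for url, nsfw in zip(data["outputs"], flags) if not nsfw]
```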

Result Request Parameters

| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| id | string | Yes | - | Task ID |

Result Response Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., "success") |
| data | object | The prediction data object containing all details |
| data.id | string | Unique identifier for the prediction (the task ID used in the request) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., "2023-04-01T12:34:56.789Z") |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |
© 2025 WaveSpeedAI. All rights reserved.