WaveSpeed.ai
image-to-image

Flux Kontext Dev

wavespeed-ai/flux-kontext-dev

FLUX.1 Kontext Dev is an open-weight, open-code image-to-image model that edits images from natural-language instructions, enabling precise retouching and style transfer. Ready-to-use REST inference API, no cold starts, affordable pricing.

Input

  • width / height: 1024 × 1024 px by default (range: 256–1536)
  • enable_base64_output: if enabled, the output is encoded as a BASE64 string instead of a URL. This property is only available through the API.
  • enable_sync_mode: if set to true, the request waits for the result to be generated and uploaded before returning, so you get the result directly in the response. This property is only available through the API.


Your request will cost $0.025 per run.

For $1 you can run this model about 40 times.

Examples

Turn pictures into anime style
Change the car color to red.
Change the background to a dark, eerie haunted castle with gothic architecture, cracked stone walls, flickering lanterns, and dense fog. Adjust the lighting and shadows on the girl so they match the ominous mood of the new background — cool tones, directional lighting from the castle, and subtle atmospheric glow. Keep the girl's pose and expression intact, but ensure overall harmony between subject and environment.
Turn it into a clay style, soft, handcrafted textures, rounded shapes, colorful modeling clay look
Put the Mona Lisa in sunglasses.
Into Ghibli style
Add the text “COOL” to the image.
Change the background to sky
Turning skateboards into surfboards
Turned into anime style
She is wearing a red bikini
Turn pictures into card style.

README

FLUX Kontext Dev — wavespeed-ai/flux-kontext-dev

FLUX.1 Kontext Dev is an open-weight, open-code image-to-image model built for instruction-based editing. Provide a source image plus a natural-language edit request, and the model rewrites the image while preserving the original context when asked—making it suitable for targeted retouching, object changes, background swaps, text edits, and controlled style transforms.

Key capabilities

  • Instruction-based image editing from a single input image
  • Strong subject and scene preservation when you explicitly request it
  • Local and global edits: change specific regions or the whole image
  • Iterative editing workflow: apply multiple edits step-by-step with minimal drift

Typical use cases

  • Retouching: lighting, exposure, cleanup, blemish removal
  • Object edits: add/remove/replace items, change colors/materials
  • Background replacement: swap environments while keeping the subject consistent
  • Text edits: add or replace words on signs, posters, packaging
  • Style transforms: convert to clay, illustration, cinematic, etc., while preserving composition

Pricing

$0.025 per image.

Cost per run = num_images × $0.025. Example: num_images = 4 → $0.10.
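
The cost formula above is simple enough to compute directly; a minimal sketch, using the $0.025 per-image rate stated on this page:

```python
PRICE_PER_IMAGE = 0.025  # USD per generated image, per this page's pricing

def cost_per_run(num_images: int) -> float:
    """Return the USD cost of one run that generates `num_images` images."""
    return num_images * PRICE_PER_IMAGE

print(cost_per_run(4))  # 4 images at $0.025 each -> $0.10
```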

Inputs and outputs

Input:

  • One source image (upload or public URL)
  • One edit instruction (prompt)

Output:

  • One or more edited images (controlled by num_images)

Parameters

  • prompt: Edit instruction describing what to change and what to keep
  • image: Source image
  • width / height: Output resolution
  • num_inference_steps: More steps usually improve quality but increase latency
  • guidance_scale: Higher values follow the prompt more strongly; too high may over-edit
  • num_images: Number of variations generated per run
  • seed: Fixed value for reproducibility; -1 for random
  • output_format: jpeg or png
  • enable_base64_output: Return BASE64 instead of a URL (API only)
  • enable_sync_mode: Wait for generation and return results directly (API only)
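
The parameters above map onto a JSON request body. Below is a minimal sketch of assembling one; the default values shown (steps, guidance) are illustrative assumptions, not documented defaults, and the endpoint URL in the comment is likewise an assumption rather than confirmed API documentation:

```python
# Sketch of a request payload for wavespeed-ai/flux-kontext-dev.
# Field names mirror the parameter list above; defaults are illustrative only.

def build_payload(prompt, image_url, width=1024, height=1024,
                  num_inference_steps=28, guidance_scale=2.5,
                  num_images=1, seed=-1, output_format="jpeg",
                  enable_base64_output=False, enable_sync_mode=False):
    """Assemble the JSON body for an image-to-image edit request."""
    return {
        "prompt": prompt,                  # edit instruction
        "image": image_url,                # source image (public URL)
        "width": width,                    # 256-1536 per the input hints
        "height": height,
        "num_inference_steps": num_inference_steps,
        "guidance_scale": guidance_scale,
        "num_images": num_images,
        "seed": seed,                      # -1 for random
        "output_format": output_format,    # "jpeg" or "png"
        "enable_base64_output": enable_base64_output,  # API only
        "enable_sync_mode": enable_sync_mode,          # API only
    }

payload = build_payload(
    "Change the car color to red. Preserve reflections and "
    "keep the rest of the scene unchanged.",
    "https://example.com/car.jpg",  # hypothetical source image
)

# To send it (hypothetical endpoint; requires an API key):
# import requests
# resp = requests.post(
#     "https://api.wavespeed.ai/...",  # see the provider's API docs
#     headers={"Authorization": "Bearer <API_KEY>"},
#     json=payload,
# )
```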

Prompting guide

Write prompts like an editor’s brief:

  1. Preserve clause: what must stay the same
  2. Edit clause: what must change
  3. Constraints: realism level, lighting, placement, typography, materials
  4. Consistency: match shadows/highlights to the new scene

Template: Keep [what must stay]. Change [what to edit]. Ensure [constraints]. Match [lighting/shadows/style consistency].
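
The template can be assembled mechanically from its four clauses; a small helper sketch (the function name is illustrative):

```python
def build_edit_prompt(keep, change, ensure=None, match=None):
    """Assemble an editor's-brief prompt: preserve clause, edit clause,
    optional constraints, optional consistency clause."""
    parts = [f"Keep {keep}.", f"Change {change}."]
    if ensure:
        parts.append(f"Ensure {ensure}.")
    if match:
        parts.append(f"Match {match}.")
    return " ".join(parts)

prompt = build_edit_prompt(
    keep="the person's face, pose, and clothing unchanged",
    change="the background to a foggy gothic castle",
    match="lighting and shadows to the new environment",
)
```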

Example prompts

  • Keep the person’s face, pose, and clothing unchanged. Change the background to a foggy gothic castle. Match lighting and shadows to the new environment.
  • Change the car color to red. Preserve reflections and keep the rest of the scene unchanged.
  • Add the text “COOL” on the sign in the same perspective, with realistic shadows, and do not alter anything else.
  • Turn the image into a clay style with handcrafted texture and soft studio lighting, while keeping the composition and subject identity.
  • Remove the background crowd and keep the main subject sharp and unchanged.

Best practices

  • Start simple, then iterate: do one change per run for maximum control.
  • If the edit is too aggressive, lower guidance_scale and strengthen the preserve clause.
  • For A/B comparisons, keep seed fixed and change only one parameter at a time.
  • Use aspect-matched width/height to avoid unintended stretching.
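
The fixed-seed A/B advice can be scripted as a one-knob parameter sweep; a minimal sketch, with field names following the parameter list in this README and a hypothetical source image URL:

```python
# Build one request body per guidance_scale value, holding everything else
# (including seed) fixed so differences are attributable to that one knob.

base = {
    "prompt": "Change the car color to red. Keep the rest of the scene unchanged.",
    "image": "https://example.com/car.jpg",  # hypothetical source image
    "seed": 42,        # fixed for reproducibility across the sweep
    "num_images": 1,
}

sweep = [{**base, "guidance_scale": g} for g in (1.5, 2.5, 3.5)]

for req in sweep:
    print(req["seed"], req["guidance_scale"])  # same seed, one varying knob
```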