HiDream-O1-Image Dev (Edit)
HiDream-O1-Image Dev is the distilled variant of HiDream-O1-Image. The same single native model handles text-to-image, prompt-driven editing, and subject-driven personalization at up to 2K (≈2048×2048) — but tuned for fast, low-step inference so you pay less and wait less per edit. Provide one or more reference images plus an instruction; the model edits while keeping identity, composition, and lighting stable.
Why Choose This?
- Distilled for speed — Fewer denoising steps than the full HiDream-O1-Image mean lower latency per edit.
- Lower cost — Half the price of the full model, ideal for batch editing, e-commerce variations, and rapid iteration.
- Identity-preserving edits — Keeps the subject, pose, and overall composition stable while applying the requested change.
- Multi-reference support — Accepts one image for plain editing, or multiple reference images for subject-driven personalization (consistent characters, products, IPs).
- High-resolution output — Up to ~2048×2048 with sharp detail and natural lighting.
- Strong instruction following — Faithfully interprets edit instructions (wardrobe, scene tweaks, object swaps, style changes, on-image text).
- Multiple output formats — JPEG, PNG, or WEBP.
- Reproducibility — Use a fixed `seed` to recreate exact results.
Parameters
| Parameter | Required | Description |
|---|---|---|
| prompt | Yes | Text instruction describing the edit you want to make. |
| images | Yes | Reference images for editing or subject-driven personalization. Each entry is a public URL or base64 data URI (PNG, JPEG, JPG, or WebP, up to 50MB, aspect ratio between 1:4 and 4:1). Pass one image for editing, multiple for subject-driven personalization. |
| size | No | Output image size as WIDTH*HEIGHT (default: 2048*2048). The model snaps to the closest supported resolution. |
| output_format | No | Output format: jpeg, png, or webp (default: jpeg). |
| seed | No | Random seed for reproducible generation. |
| enable_sync_mode | No | API only — wait for the result and return it inline. |
| enable_base64_output | No | API only — return the image as a base64 string instead of a URL. |
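A request body assembled from the parameters above can be sketched in Python. The field names follow the table; the `to_data_uri` helper only illustrates the base64 data URI option and is not part of any SDK:

```python
import base64
import json

def to_data_uri(path: str, mime: str = "image/jpeg") -> str:
    """Encode a local image as a base64 data URI, one of the accepted `images` forms."""
    with open(path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    return f"data:{mime};base64,{encoded}"

# Minimal edit request: one reference image, an instruction, and a pinned seed.
payload = {
    "prompt": "Change the jacket to black leather. "
              "Keep the person, pose, and background unchanged.",
    "images": ["https://example.com/hero-shot.jpg"],  # one image => plain editing
    "size": "2048*2048",     # WIDTH*HEIGHT; snapped to the closest supported size
    "output_format": "png",  # lossless, good for downstream re-edits
    "seed": 42,              # fixed seed => reproducible result
}
print(json.dumps(payload, indent=2))
```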
How to Use
- Upload your reference image(s) — drag and drop or paste publicly accessible URLs. One image for editing, multiple for subject-driven personalization.
- Write your edit instruction — describe both the change and what to keep (e.g., "Change the jacket to black leather. Keep the person, pose, and background unchanged.").
- Choose `size` (optional) — keep the source aspect ratio or pick a new one.
- Set `output_format` — JPEG for compact files, PNG for lossless, WEBP for balance.
- Set `seed` (optional) — pin a number to reproduce the same edit.
- Run — submit the request and download the edited image.
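The steps above can be sketched as a synchronous API call. The endpoint URL, auth header, and response shape below are hypothetical placeholders — check your provider's API reference; `build_request` is pure and only mirrors the documented parameters:

```python
import json
import urllib.request

API_URL = "https://api.example.com/v1/hidream-o1-image-dev/edit"  # hypothetical endpoint

def build_request(prompt: str, images: list[str], **options) -> dict:
    """Assemble a request body; enable_sync_mode asks the API to return the result inline."""
    return {"prompt": prompt, "images": images, "enable_sync_mode": True, **options}

def submit(body: dict, api_key: str) -> dict:
    """POST the edit request and parse the JSON response (shape is provider-specific)."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        return json.loads(resp.read())

body = build_request(
    "Keep the same person and pose. Change the outfit to a light gray sweater.",
    ["https://example.com/portrait.jpg"],
    size="1536*2048",
    seed=7,
)
# result = submit(body, "YOUR_API_KEY")  # uncomment with real credentials
```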
Pricing
Flat rate per edit — half the price of the full HiDream-O1-Image (Edit).
| Output | Cost |
|---|---|
| Per image edit | $0.02 |
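At a flat $0.02 per edit, batch costs are easy to estimate; a quick sanity check against the full model's $0.04 rate:

```python
PRICE_DEV = 0.02   # HiDream-O1-Image Dev (Edit), per image edit
PRICE_FULL = 0.04  # full HiDream-O1-Image (Edit), for comparison

def batch_cost(n_edits: int, price_per_edit: float = PRICE_DEV) -> float:
    """Total cost for a batch, rounded to whole cents."""
    return round(n_edits * price_per_edit, 2)

print(batch_cost(500))              # 500 colorway variations on Dev -> 10.0
print(batch_cost(500, PRICE_FULL))  # same batch on the full model   -> 20.0
```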
Best Use Cases
- Batch e-commerce variations — Multiple colorways or styling options from a hero shot, at scale.
- Subject-driven personalization — Pass several reference images to keep a character / product / IP consistent across new scenes.
- Wardrobe & accessory edits — Swap clothing color/style, add/remove glasses, hats, jewelry.
- Background & scene tweaks — Replace backgrounds or swap props while keeping the subject pixel-stable.
- Object add / remove / replace — Cleanly modify specific elements without disturbing the rest of the scene.
- Style transfer — Apply a new look while preserving the subject's identity.
- Marketing iteration — Quickly produce many creative variants per source image at low cost.
Dev vs. Full — Which to Pick?
| | HiDream-O1-Image Dev (Edit) | HiDream-O1-Image (Edit, Full) |
|---|---|---|
| Speed | Faster (distilled, fewer steps) | Standard |
| Cost / edit | $0.02 | $0.04 |
| Quality ceiling | High | Highest |
| Best for | Batches, drafts, low-cost edits | Final hero edits |
Prompting Guide
For clean results, specify both the change and what to preserve:
Template:
Keep the same person, pose, and background. Change [target edit]. Keep lighting natural and consistent.
Examples:
- "Keep the same person and pose. Change the outfit to a light gray sweater and add gold thin-rimmed glasses. Keep the background and lighting unchanged."
- "Keep identity and composition. Replace the jacket with a black leather jacket; keep realistic fabric texture and natural shadows."
- "Keep the subject unchanged. Remove the object on the table and keep the scene lighting consistent."
- "Keep the person and pose. Change the background to a sunlit beach at golden hour."
For subject-driven personalization, pass several reference images of the same subject (different angles / outfits / lighting) and describe the new scene; the model uses them jointly to keep identity consistent.
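That multi-reference pattern looks like this in a request body (the URLs and prompt are illustrative):

```python
# Several varied shots of the same subject switch the model into
# subject-driven personalization mode.
references = [
    "https://example.com/mascot-front.jpg",
    "https://example.com/mascot-side.jpg",
    "https://example.com/mascot-outdoor.jpg",
]

payload = {
    "prompt": (
        "Place the mascot from the reference images in a sunlit coffee shop, "
        "sitting at a window table. Keep its colors, proportions, and face "
        "consistent with the references."
    ),
    "images": references,  # multiple images => personalization
    "size": "2048*2048",
    "seed": 7,             # pinned so prompt variants are comparable
}
```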
Pro Tips
- Always state what should stay the same — it dramatically improves identity preservation.
- For multi-reference personalization, give 3–5 varied shots of the subject for best identity locking.
- Reference elements explicitly: "the text on the shirt", "the background", "the person on the left".
- Pin a `seed` while iterating so you can compare prompt variants against a fixed baseline.
- For multi-step edits, prefer sequential single-purpose prompts over one long compound instruction.
- Use `png` for lossless edits when output will be re-edited downstream.
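The sequential-edit tip can be sketched as a simple chain. `edit` below is a stub standing in for a real API call, so the pattern runs without network access:

```python
def edit(image_url: str, prompt: str, seed: int = 42) -> str:
    """Stub for one API edit call: returns the URL of the edited image."""
    return f"{image_url}#edited"  # a real implementation would call the API

steps = [
    "Change the jacket to black leather. Keep everything else unchanged.",
    "Replace the background with a sunlit beach at golden hour. Keep the person unchanged.",
    "Add thin gold-rimmed glasses. Keep pose and lighting unchanged.",
]

current = "https://example.com/source.jpg"
for instruction in steps:
    # Each single-purpose edit feeds its output into the next step.
    current = edit(current, instruction)
```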
Notes
- If using URLs for the source images, ensure they are publicly accessible.
- The `enable_base64_output` and `enable_sync_mode` options are only available through the API.
- Higher resolutions may slightly increase processing time.
- Ensure prompts comply with content guidelines.
Related Models
- HiDream-O1-Image Dev · Text-to-Image — Same distilled model, in text-to-image mode.
- HiDream-O1-Image · Edit — The full (non-distilled) edit model for highest fidelity.
- HiDream-O1-Image · Text-to-Image — Full model in text-to-image mode.