Uno AI transforms input images into new visuals guided by text prompts, blending reference images with your creative directions for precise, style-aware edits. A ready-to-use REST inference API with fast performance, no cold starts, and affordable pricing.

$0.05 per run · ~20 runs / $1
UNO is a subject-driven image generation model from ByteDance Research. It takes a small set of reference images plus a text prompt and synthesizes new scenes in which the same subjects reappear with high identity consistency and strong style control. It works for both single-subject and multi-subject prompts.
- Subject-consistent generation: keep the same person, character, or product recognizable across new scenes and poses.
- Single → multi-subject scenes: start from one subject or combine several references into a coherent group image.
- Layout & style control: use the prompt and image_size to steer framing, setting, and visual mood while preserving identity.
- Flexible aspect ratios: supports portrait, landscape, and square formats suitable for thumbnails, posts, key art, and ads.
Inputs and parameters:

- Reference images: 1–5 images of your subject(s). These define identity, clothing, and overall look.
- Prompt: a text description of the scene you want to generate. UNO combines the prompt with your references to place the subjects into the requested scene.
- image_size: controls aspect ratio and framing. Choose based on where the image will be used (feed post, story, banner, thumbnail, etc.).
- Seed: randomness control. Reusing the same seed with the same inputs reproduces the same result.
- Number of images: how many images to generate per run (e.g., 1–4). Higher values give more options at once.
- Inference steps: number of diffusion steps (around 20–30 by default). More steps can improve detail at the cost of longer runs.
- Guidance scale: classifier-free guidance strength. Higher values follow the prompt more closely; lower values allow more variation.
- Output format: file format of the generated images, jpeg or png.
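The parameters above can be sketched as a request payload. This is a minimal illustration only: the field names, defaults, endpoint, and authentication scheme are assumptions, not the provider's documented API, so check the actual API reference before use.

```python
import json

# Hypothetical payload builder for a UNO run. Field names such as
# "image_urls" and the guidance_scale default are illustrative
# assumptions, not confirmed API details.
def build_uno_payload(prompt, reference_images, image_size="square",
                      seed=None, num_images=1, num_inference_steps=25,
                      guidance_scale=3.5, output_format="png"):
    # UNO takes 1-5 reference images that define the subject's identity.
    if not 1 <= len(reference_images) <= 5:
        raise ValueError("expected 1-5 reference images")
    if output_format not in ("jpeg", "png"):
        raise ValueError("output_format must be 'jpeg' or 'png'")
    payload = {
        "prompt": prompt,
        "image_urls": reference_images,        # assumed field name
        "image_size": image_size,              # steers aspect ratio/framing
        "num_images": num_images,              # options generated per run
        "num_inference_steps": num_inference_steps,
        "guidance_scale": guidance_scale,      # prompt adherence strength
        "output_format": output_format,
    }
    if seed is not None:
        payload["seed"] = seed  # fixed seed -> reproducible output
    return payload

# Example: place the same subject into a new scene.
payload = build_uno_payload(
    prompt="the same character riding a bicycle through a rainy street",
    reference_images=["https://example.com/subject.png"],
    seed=42,
)
print(json.dumps(payload, indent=2))
```

The resulting JSON would be POSTed to the model's inference endpoint with your API key; only the payload construction is shown here.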