FLUX.2 Flash Text-to-Image
Playground
Try it on WaveSpeedAI! FLUX.2 Flash is a fast, production-grade AI text-to-image generator from Black Forest Labs, optimized for realistic renders and crisp, typo-free text. It delivers prompt-faithful styles for posters, logos, product shots, and social ads, and the FLUX.2 Flash family also covers prompt-based editing (image-to-image, inpainting/outpainting, background replacement, and quick variants) through the companion Edit model. Ready-to-use REST API with low latency, no cold starts, and affordable pricing.
Features
WaveSpeedAI FLUX.2 Flash Text-to-Image
FLUX.2 Flash Text-to-Image is a fast, production-focused image generation model designed for high-volume, low-latency workflows. It turns a single text prompt into a ready-to-use image, making it a strong default for rapid iteration, batch pipelines, and “generate lots of options quickly” use cases.
This wrapper is text-to-image (text in, image out). If you need prompt-based edits to existing images (image-to-image, inpainting/outpainting, background replacement, etc.), use the related FLUX.2 Flash Edit model instead.
Key capabilities
- Fast, low-latency image generation for rapid iteration and large-scale creative production.
- Prompt-faithful composition and styles that hold up well across repeated runs and automation pipelines.
- Crisp on-image text tuned for cleaner typography (helpful for posters, product shots, and social creatives).
- Flexible output sizing via `size` or explicit `width`/`height` for banners, posters, square assets, and more.
- Reproducible variations with `seed` for controlled exploration and reruns.
- API-friendly output controls like synchronous waiting (`enable_sync_mode`) and base64 responses (`enable_base64_output`) for easier server-side integration.
Parameters and how to use
- `prompt` (required): The text instruction that describes what you want to generate.
- `size`: A shorthand output size string (commonly formatted like `WIDTH*HEIGHT`).
- `seed`: Randomness control for reproducibility (`-1` for a new random result each run).
- `enable_sync_mode`: If `true`, wait for generation/upload and return the result directly (API only).
- `enable_base64_output`: If `true`, return base64 output instead of a URL (API only; all of these parameters are combined in the sketch below).
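Taken together, a minimal request body might look like the following sketch (the prompt text is a placeholder; the full curl call appears under API Endpoints below):

{
    "prompt": "a minimalist poster of a red bicycle, studio lighting, centered composition",
    "size": "1024*1024",
    "seed": -1,
    "enable_sync_mode": false,
    "enable_base64_output": false
}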
Prompt
Write your prompt like you’re briefing a photographer or designer:
- Start with the subject + setting, then add style, camera/lighting, and details that matter (materials, mood, composition).
- For marketing/product visuals, include: background type, surface/reflection, lighting direction, and “clean” constraints (e.g., “no extra objects, no watermark”).
- For on-image text, keep it short and explicit (see the example below):
  - Put the exact text in quotes.
  - Specify placement and typography cues (e.g., “centered headline, bold sans-serif, high contrast”).
  - If you get typos, simplify the layout and reduce the amount of text.
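For example, a prompt following this structure might read (illustrative only; adjust the subject, styling, and quoted text to your asset):

A poster-style product shot of a matte black water bottle on a light gray studio background, soft directional lighting from the left, subtle reflection on the surface, no extra objects, no watermark. Centered headline "STAY COLD" in a bold sans-serif, high contrast.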
Other parameters
- `size`: Use when you want a quick preset-like size string. Across WaveSpeedAI FLUX endpoints, this is commonly written as `WIDTH*HEIGHT` (for example, `1024*1024`). If you set `width` and `height`, keep them consistent with `size` to avoid ambiguity.
- `width`/`height`: Use when you need exact dimensions (e.g., wide banners vs. tall posters).
- `seed`:
  - Use `-1` for a fresh random result each run.
  - Use a fixed integer (e.g., `12345`) to reproduce a composition or generate controlled variations while you iterate on the prompt.
- `enable_sync_mode` (API only): Set to `true` when you want the call to wait and return the generated result in the same response (useful for simple backends and demos).
- `enable_base64_output` (API only): Set to `true` when you want the output encoded as base64 instead of a hosted URL (useful for storage, pipelines, or environments that can’t fetch URLs). A combined sketch of these two flags follows this list.
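As a sketch of how the two API-only flags can be combined, assuming a POSIX shell with `curl`, `jq`, and `base64` available, and assuming the base64 payload is returned directly in `data.outputs[0]` (verify against your actual response):

# Wait synchronously and decode the base64 output to a local file (illustrative sketch)
curl -s --location --request POST "https://api.wavespeed.ai/api/v3/wavespeed-ai/flux-2-flash/text-to-image" \
    --header "Content-Type: application/json" \
    --header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
    --data-raw '{
        "prompt": "a simple flat-design logo of a blue sailboat on white",
        "size": "1024*1024",
        "enable_sync_mode": true,
        "enable_base64_output": true
    }' | jq -r '.data.outputs[0]' | base64 --decode > output.png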
After you finish configuring the parameters, click Run, preview the result, and iterate if needed.
Pricing
$0.008 per run
Notes
- If you’re generating many variations, keep `size` (or `width`/`height`) and a fixed `seed` consistent while you tune the prompt; this makes changes easier to compare. A minimal loop for that workflow is sketched below.
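A minimal sketch of that workflow, assuming a POSIX shell with `curl` available and using placeholder prompt variants with a fixed example seed:

# Submit the same size and fixed seed while varying only the prompt (illustrative sketch)
for prompt in "red bicycle poster, studio lighting" "red bicycle poster, golden-hour street scene"; do
    curl -s --location --request POST "https://api.wavespeed.ai/api/v3/wavespeed-ai/flux-2-flash/text-to-image" \
        --header "Content-Type: application/json" \
        --header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
        --data-raw "{\"prompt\": \"${prompt}\", \"size\": \"1024*1024\", \"seed\": 12345}"
    echo
done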
Related Models
- FLUX.2 Flash Edit – Prompt-based image editing (image-to-image) when you need to modify existing assets instead of generating from scratch.
- FLUX.2 Dev Text-to-Image – Leaner base model in the FLUX.2 family for fast iteration and training-friendly workflows.
- FLUX.2 Flex Text-to-Image – More style-forward generation when you want broader aesthetics while staying fast.
- FLUX.2 Pro Text-to-Image – Higher-capacity option for more demanding production visuals and higher fidelity outputs.
Authentication
For authentication details, please refer to the Authentication Guide.
API Endpoints
Submit Task & Query Result
# Submit the task
curl --location --request POST "https://api.wavespeed.ai/api/v3/wavespeed-ai/flux-2-flash/text-to-image" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{
    "prompt": "your prompt here",
    "size": "1024*1024",
    "seed": -1,
    "enable_sync_mode": false,
    "enable_base64_output": false
}'
# Get the result
curl --location --request GET "https://api.wavespeed.ai/api/v3/predictions/${requestId}/result" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}"
Parameters
Task Submission Parameters
Request Parameters
| Parameter | Type | Required | Default | Range | Description |
|---|---|---|---|---|---|
| prompt | string | Yes | - | - | The positive prompt for the generation. |
| size | string | No | 1024*1024 | 256 ~ 1536 per dimension | The size of the generated media in pixels (width*height). |
| seed | integer | No | -1 | -1 ~ 2147483647 | The random seed to use for the generation. -1 means a random seed will be used. |
| enable_sync_mode | boolean | No | false | - | If set to true, the function will wait for the result to be generated and uploaded before returning the response. It allows you to get the result directly in the response. This property is only available through the API. |
| enable_base64_output | boolean | No | false | - | If enabled, the output will be encoded into a BASE64 string instead of a URL. This property is only available through the API. |
Response Parameters
| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., “success”) |
| data.id | string | Unique identifier for the prediction, Task Id |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.has_nsfw_contents | array | Array of boolean values indicating NSFW detection for each output |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”) |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |
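An illustrative submission response shaped like the table above (all values are placeholders, not actual API output):

{
    "code": 200,
    "message": "success",
    "data": {
        "id": "abc123",
        "model": "wavespeed-ai/flux-2-flash/text-to-image",
        "outputs": [],
        "urls": {
            "get": "https://api.wavespeed.ai/api/v3/predictions/abc123/result"
        },
        "has_nsfw_contents": [],
        "status": "created",
        "created_at": "2023-04-01T12:34:56.789Z",
        "error": "",
        "timings": {
            "inference": 0
        }
    }
}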
Result Request Parameters
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| id | string | Yes | - | Task ID |
Result Response Parameters
| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., “success”) |
| data | object | The prediction data object containing all details |
| data.id | string | Unique identifier for the prediction, the ID of the prediction to get |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when status is not completed). |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”) |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |
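Once `data.status` is `completed`, the first output can be extracted and downloaded, for example with `jq` (illustrative sketch; assumes URL output, i.e. `enable_base64_output` was false):

# Fetch the result, extract the first output URL, and download the image (illustrative sketch)
curl -s --location --request GET "https://api.wavespeed.ai/api/v3/predictions/${requestId}/result" \
    --header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
    | jq -r '.data.outputs[0]' \
    | xargs curl -s -o output.png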