Luma Photon Flash
Playground
Try it on WavespeedAI!
Luma Photon Flash is a Luma text-to-image model that generates images directly from text prompts. It is available through a ready-to-use REST inference API with strong performance, no cold starts, and affordable pricing.
Features
Luma Photon Flash
Generate images at incredible speed and unbeatable value with Luma Photon Flash. At just $0.005 per image, this ultra-fast text-to-image model delivers quality results for rapid prototyping, high-volume generation, and budget-conscious creative work.
Looking for higher quality? Try Luma Photon for enhanced output.
Why It Stands Out
- Unbeatable price: Just $0.005 per image — generate 200 images for $1.
- Flash speed: Optimized for the fastest possible generation.
- Atmospheric scenes: Creates mood-driven, emotionally resonant imagery.
- Prompt Enhancer: Built-in tool to refine your descriptions automatically.
- High-volume ready: Perfect for batch generation and rapid iteration.
Parameters
| Parameter | Required | Description |
|---|---|---|
| prompt | Yes | Text description of the image you want to generate. |
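The prompt maps directly to the JSON body accepted by the REST API (see API Endpoints below). A minimal request body, using one of the example prompts from this page, looks like this:
{
  "prompt": "Cat sleeping on a sunny windowsill, dust particles in light, peaceful afternoon"
}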
How to Use
- Write your prompt — describe the scene, mood, and atmosphere.
- Use Prompt Enhancer (optional) — click to automatically enrich your description.
- Run — click the button to generate.
- Download — preview and save your image.
Pricing
Flat rate per image.
| Output | Cost |
|---|---|
| Per image | $0.005 |
| 100 images | $0.50 |
| 1,000 images | $5.00 |
Best Use Cases
- Rapid Prototyping — Test concepts and ideas at minimal cost.
- High-Volume Generation — Produce large batches affordably.
- Mood Boards — Generate multiple atmospheric images for inspiration.
- Social Media Content — Create visuals for posts and stories quickly.
- Creative Exploration — Experiment freely without budget concerns.
Example Prompts
- “Teen girl journaling in bed by phone flashlight, stuffed animals around her, stickers on the wall, cozy and introspective vibe”
- “Neon-lit ramen shop at night, steam rising, lonely customer at counter”
- “Cat sleeping on a sunny windowsill, dust particles in light, peaceful afternoon”
- “Abandoned amusement park at dusk, nostalgic and slightly eerie atmosphere”
- “Couple sharing headphones on a train, city lights passing by window”
Model Comparison
| Model | Cost | Speed | Best For |
|---|---|---|---|
| Photon Flash | $0.005 | Fastest | High-volume, prototyping, budget work |
| Photon | Higher | Standard | Quality output, final deliverables |
Pro Tips for Best Results
- Include mood and atmosphere: “cozy”, “introspective”, “nostalgic”, “peaceful”.
- Describe lighting conditions: “phone flashlight”, “neon-lit”, “sunny”.
- Add environmental details for richer scenes.
- At $0.005, generate many variations and pick the best.
- Perfect for exploring ideas before committing to higher-quality generation.
- Flash excels at atmospheric, mood-driven scenes.
Notes
- The most affordable text-to-image option available.
- Flash mode prioritizes speed and cost efficiency.
- Ideal for testing prompts before using premium models.
- Processing is optimized for rapid turnaround.
Authentication
For authentication details, please refer to the Authentication Guide.
API Endpoints
Submit Task & Query Result
# Submit the task
curl --location --request POST "https://api.wavespeed.ai/api/v3/luma/photon-flash" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{
    "prompt": "Neon-lit ramen shop at night, steam rising, lonely customer at counter",
    "enable_base64_output": false
}'
# Get the result
curl --location --request GET "https://api.wavespeed.ai/api/v3/predictions/${requestId}/result" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}"
Parameters
Task Submission Parameters
Request Parameters
| Parameter | Type | Required | Default | Range | Description |
|---|---|---|---|---|---|
| prompt | string | Yes | - | - | The positive prompt for the generation. |
| enable_base64_output | boolean | No | false | - | If enabled, the output will be encoded into a BASE64 string instead of a URL. This property is only available through the API. |
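When enable_base64_output is set to true, the output is returned as a BASE64 string rather than a URL. A sketch for saving such a result locally, assuming jq is installed and assuming the encoded string appears in data.outputs[0] of a completed result (verify the exact field and image format against your own responses):
# Decode a base64 result to a local file (assumes jq; assumes data.outputs[0]
# holds the raw base64 string when enable_base64_output was true; file name
# and extension are placeholders)
curl --silent --location --request GET "https://api.wavespeed.ai/api/v3/predictions/${requestId}/result" \
  --header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
  | jq -r '.data.outputs[0]' | base64 -d > photon-flash-output.jpg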
Response Parameters
| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., “success”) |
| data.id | string | Unique identifier for the prediction (task ID) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.has_nsfw_contents | array | Array of boolean values indicating NSFW detection for each output |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”) |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |
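For orientation, a submission response assembled from the fields above has roughly this shape (the ID, URL, model string, and timestamp are placeholders, not real values):
{
  "code": 200,
  "message": "success",
  "data": {
    "id": "<task-id>",
    "model": "luma/photon-flash",
    "outputs": [],
    "urls": {
      "get": "https://api.wavespeed.ai/api/v3/predictions/<task-id>/result"
    },
    "has_nsfw_contents": [],
    "status": "created",
    "created_at": "2023-04-01T12:34:56.789Z",
    "error": "",
    "timings": {
      "inference": 0
    }
  }
}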
Result Request Parameters
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| id | string | Yes | - | Task ID |
Result Response Parameters
| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., “success”) |
| data | object | The prediction data object containing all details |
| data.id | string | Unique identifier for the prediction (the task ID used in the request) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”) |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |
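A completed result response, again built only from the fields documented above (all values are placeholders):
{
  "code": 200,
  "message": "success",
  "data": {
    "id": "<task-id>",
    "model": "luma/photon-flash",
    "outputs": [
      "https://<output-host>/<generated-image>.jpg"
    ],
    "urls": {
      "get": "https://api.wavespeed.ai/api/v3/predictions/<task-id>/result"
    },
    "status": "completed",
    "created_at": "2023-04-01T12:34:56.789Z",
    "error": "",
    "timings": {
      "inference": 850
    }
  }
}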