Z.AI GLM Image Text-to-Image
Playground
Try it on WavespeedAI! Z.AI GLM Image generates high-quality images from text prompts, with enhanced understanding of user descriptions, resulting in images that are more precise and personal. Ready-to-use REST inference API, best performance, no cold starts, affordable pricing.
Features
Z.AI GLM-Image Text-to-Image
GLM-Image is Z.AI’s powerful text-to-image generation model built on the GLM architecture. It transforms natural language prompts into high-quality images with strong prompt adherence, flexible sizing, and fast generation speed.
Why Choose This?
- Strong prompt understanding: accurately interprets detailed prompts to generate images that match your description with high fidelity.
- Flexible sizing: custom width and height controls let you create images for any use case, including social media, print, web, and mobile.
- Prompt Enhancer: a built-in tool that automatically improves your prompts for better generation results.
- Multiple output formats: export as JPEG for smaller file sizes or PNG for lossless quality.
- Fast generation: optimized for quick turnaround, ideal for rapid ideation and creative iteration.
Parameters
| Parameter | Required | Description |
|---|---|---|
| prompt | Yes | Text description of the image you want to generate |
| width | No | Output width in pixels (default: 1024) |
| height | No | Output height in pixels (default: 1024) |
| seed | No | Random seed for reproducibility (-1 for random) |
| output_format | No | Output format: jpeg (default) or png |
| enable_prompt_expansion | No | Enhance prompt using LLM for better results |
Output Format Options
- jpeg — Smaller file size, good for photos and web use (default)
- png — Lossless quality, supports transparency, best for graphics
Prompt Expansion
When enabled, the model uses an LLM to automatically expand and enhance your prompt for better generation results. This is useful when you have a short or simple prompt and want the model to add more detail.
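For illustration, a short prompt combined with prompt expansion might be expressed as the sketch below. Field names follow the Parameters table above; note that the REST endpoint documented later on this page takes a combined size string (width*height) rather than separate width and height, so treat this as a conceptual sketch rather than an exact API payload.
{
  "prompt": "a red fox in the snow",
  "width": 1024,
  "height": 1024,
  "seed": -1,
  "output_format": "png",
  "enable_prompt_expansion": true
}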
How to Use
- Write your prompt — describe the image including subject, style, lighting, and mood.
- Set size — adjust width and height for your desired dimensions.
- Set seed — use -1 for random results, or specify a number for reproducibility.
- Choose output format — jpeg for smaller files, png for lossless quality.
- Enable prompt expansion (optional) — check this to let an LLM enhance your prompt automatically.
- Run — click Run, preview the result, and iterate if needed.
Pricing
| Item | Cost |
|---|---|
| Per image | $0.12 |
Simple flat-rate pricing regardless of image size or output format.
Best Use Cases
- Social Media Content — Create engaging visuals for posts, stories, and ads.
- Marketing Materials — Generate promotional images and banner graphics.
- Concept Art — Quickly visualize ideas for creative projects.
- Product Visualization — Create mockups and product imagery.
- Presentations — Generate visuals to enhance slides and documents.
Pro Tips
- Be specific in your prompts — include subject, style, lighting, colors, and atmosphere.
- Use the same seed with the same prompt to reproduce identical outputs (see the sketch after this list).
- Start with 1024x1024 for balanced quality, then adjust dimensions for specific needs.
- Use JPEG for photos and web content, PNG for graphics with text or transparency needs.
- Enable prompt expansion for short prompts; disable it if you want precise control over the output.
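To reproduce a result, submit the same prompt with a fixed seed. A minimal sketch using the REST endpoint documented in the API Endpoints section below (the prompt and seed value here are placeholders):
# Fixed seed: repeating this exact request should reproduce the same image
curl --location --request POST "https://api.wavespeed.ai/api/v3/z-ai/glm-image/text-to-image" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{
    "prompt": "a watercolor lighthouse on a cliff at sunset",
    "size": "1024*1024",
    "seed": 42,
    "output_format": "png"
}'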
Notes
- Please ensure your prompts comply with content guidelines.
- If an error occurs, review your prompt and try again.
Related Models
- Z.AI CogView-4 — Z.AI’s high-quality text-to-image model with flexible quality modes.
- Qwen Image 2512 — Alibaba’s model with exceptional text rendering capabilities.
- FLUX.2 Pro — Flagship-quality generation with cinematic detail.
Authentication
For authentication details, please refer to the Authentication Guide.
API Endpoints
Submit Task & Query Result
# Submit the task
curl --location --request POST "https://api.wavespeed.ai/api/v3/z-ai/glm-image/text-to-image" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{
    "prompt": "A cozy cabin in a snowy forest at dusk, warm light glowing in the windows",
    "size": "1024*1024",
    "seed": -1,
    "output_format": "jpeg",
    "enable_sync_mode": false,
    "enable_base64_output": false
}'
# Get the result
curl --location --request GET "https://api.wavespeed.ai/api/v3/predictions/${requestId}/result" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}"
Parameters
Task Submission Parameters
Request Parameters
| Parameter | Type | Required | Default | Range | Description |
|---|---|---|---|---|---|
| prompt | string | Yes | - | - | The positive prompt for the generation. |
| size | string | No | 1024*1024 | 256 ~ 1536 per dimension | The size of the generated media in pixels (width*height). |
| seed | integer | No | -1 | -1 ~ 2147483647 | The random seed to use for the generation. -1 means a random seed will be used. |
| output_format | string | No | jpeg | jpeg, png, webp | The format of the output image. |
| enable_sync_mode | boolean | No | false | - | If set to true, the function will wait for the result to be generated and uploaded before returning the response. This property is only available through the API. |
| enable_base64_output | boolean | No | false | - | If enabled, the output will be encoded into a BASE64 string instead of a URL. This property is only available through the API. |
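For example, setting enable_sync_mode to true makes the submit call wait until the result is ready, so a single request suffices. A sketch with placeholder values:
# Synchronous request: the response already contains the generated output
curl --location --request POST "https://api.wavespeed.ai/api/v3/z-ai/glm-image/text-to-image" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{
    "prompt": "a paper-cut style hot air balloon over rolling hills",
    "size": "1024*1024",
    "seed": 7,
    "output_format": "webp",
    "enable_sync_mode": true,
    "enable_base64_output": false
}'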
Response Parameters
| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., “success”) |
| data.id | string | Unique identifier for the prediction (task ID) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.has_nsfw_contents | array | Array of boolean values indicating NSFW detection for each output |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”) |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |
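An illustrative submission response with placeholder values, following the fields above:
{
  "code": 200,
  "message": "success",
  "data": {
    "id": "abcdef1234567890",
    "model": "z-ai/glm-image/text-to-image",
    "outputs": [],
    "urls": {
      "get": "https://api.wavespeed.ai/api/v3/predictions/abcdef1234567890/result"
    },
    "has_nsfw_contents": [],
    "status": "created",
    "created_at": "2023-04-01T12:34:56.789Z",
    "error": "",
    "timings": {
      "inference": 0
    }
  }
}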
Result Request Parameters
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| id | string | Yes | - | Task ID |
Result Response Parameters
| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., “success”) |
| data | object | The prediction data object containing all details |
| data.id | string | Unique identifier for the prediction (the task ID used in the request) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”) |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |
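For comparison, a completed result response might look like the sketch below; the output URL and timing value are placeholders.
{
  "code": 200,
  "message": "success",
  "data": {
    "id": "abcdef1234567890",
    "model": "z-ai/glm-image/text-to-image",
    "outputs": [
      "https://example.com/outputs/abcdef1234567890.jpeg"
    ],
    "urls": {
      "get": "https://api.wavespeed.ai/api/v3/predictions/abcdef1234567890/result"
    },
    "status": "completed",
    "created_at": "2023-04-01T12:34:56.789Z",
    "error": "",
    "timings": {
      "inference": 3500
    }
  }
}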