NVIDIA Nemotron 3 Nano Omni Vision
Playground
Try it on WavespeedAI! NVIDIA Nemotron 3 Nano Omni is an open, efficient reasoning model for enterprise agentic workflows, built on a 30B A3B hybrid Transformer-Mamba MoE architecture. Ready-to-use REST inference API, best performance, no cold starts, affordable pricing.
Features
NVIDIA Nemotron-3 Nano Omni Vision
NVIDIA Nemotron-3 Nano Omni Vision is a multimodal vision-language model for image understanding and analysis. Upload an image, provide an English prompt, and the model generates a text response for tasks such as image description, visual question answering, scene understanding, and structured visual analysis.
Why Choose This?
- Image understanding with natural-language prompts — Ask questions about an image or request a description in plain English.
- Flexible response control — Adjust max_tokens, temperature, and top_p to balance response length, determinism, and creativity.
- Optional system steering — Use system_prompt to guide output style, role, or response constraints for more controlled behavior.
- Reasoning mode options — Choose between no_think and think depending on your preferred response mode.
- Production-ready API — Suitable for image analysis workflows, multimodal assistants, automated review pipelines, and visual understanding tools.
Parameters
| Parameter | Required | Description |
|---|---|---|
| prompt | Yes | English text prompt sent to the model. |
| image | Yes | Image URL to analyze with the model. |
| system_prompt | No | Optional system prompt used to steer behavior, tone, or response style. |
| reasoning_mode | No | Reasoning mode: no_think (default) or think. |
| max_tokens | No | Maximum number of tokens to generate. Default: 1024. |
| temperature | No | Sampling temperature. Lower values are more deterministic. Default: 0.7. |
| top_p | No | Nucleus sampling probability mass. Default: 0.95. |
How to Use
- Upload or link your image — provide the image you want the model to analyze.
- Write your prompt — ask the model to describe, explain, compare, classify, or answer questions about the image.
- Add a system prompt (optional) — guide the response style, output format, or task framing.
- Choose reasoning mode (optional) — use no_think or think depending on your workflow.
- Set generation controls (optional) — adjust max_tokens, temperature, and top_p.
- Submit — run the model and review the generated response.
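The steps above map onto a single JSON request body. A minimal sketch in Python (the field names and defaults come from the parameter table below; `build_vision_request` and the image URL are illustrative, not part of any SDK):

```python
def build_vision_request(prompt, image, system_prompt=None,
                         reasoning_mode="no_think", max_tokens=1024,
                         temperature=0.7, top_p=0.95):
    """Assemble the JSON body for a Nemotron-3 Nano Omni Vision task."""
    body = {
        "prompt": prompt,                  # required, English only
        "image": image,                    # required image URL
        "reasoning_mode": reasoning_mode,  # "no_think" (default) or "think"
        "max_tokens": max_tokens,
        "temperature": temperature,
        "top_p": top_p,
    }
    if system_prompt:                      # optional steering
        body["system_prompt"] = system_prompt
    return body

body = build_vision_request(
    "Describe this image in detail.",
    "https://example.com/photo.jpg",       # placeholder URL
    system_prompt="Respond as a concise bullet list.",
)
```

Any field left at its default can also be omitted from the request entirely.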
Example Prompt
Describe this image in detail, including the setting, visible objects, mood, and any notable historical or architectural details.
Pricing
Billed by the configured max_tokens value.
| Max Tokens | Cost |
|---|---|
| 1000 | $0.006 |
| 1024 | $0.0061 |
| 2000 | $0.012 |
| 4000 | $0.024 |
| 8000 | $0.048 |
Billing Rules
- Pricing is based on the configured max_tokens value.
- Cost is $0.006 per 1,000 max tokens.
- Increasing max_tokens increases cost linearly.
- prompt, image, system_prompt, reasoning_mode, temperature, and top_p do not change pricing directly.
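Since cost scales linearly at $0.006 per 1,000 configured max tokens, the billed amount can be estimated with a one-line helper (a sketch for budgeting only; actual invoicing is determined by the platform):

```python
def estimated_cost_usd(max_tokens: int) -> float:
    """Estimate per-request cost: $0.006 per 1,000 configured max tokens."""
    return 0.006 * max_tokens / 1000

# Matches the pricing table above:
print(round(estimated_cost_usd(1000), 4))  # 0.006
print(round(estimated_cost_usd(1024), 4))  # 0.0061
print(round(estimated_cost_usd(8000), 4))  # 0.048
```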
Best Use Cases
- Image description — Generate clear descriptions of scenes, objects, and visual content.
- Visual question answering — Ask targeted questions about what appears in an image.
- Document and screenshot analysis — Extract meaning from UI screenshots, charts, diagrams, or other visual references.
- Content moderation and review workflows — Use text prompts to guide structured inspection of uploaded images.
- Multimodal assistants — Add image-aware understanding to support bots, tools, and internal workflows.
- Research and annotation tasks — Use guided prompts to summarize or analyze visual inputs consistently.
Pro Tips
- Write prompts in English for best compatibility.
- Be specific about what you want, such as description, object listing, comparison, or focused analysis.
- Use system_prompt when you need a consistent format, such as bullet summaries, JSON-style output, or domain-specific tone.
- Keep temperature lower when you want more stable and deterministic responses.
- Increase max_tokens only when you need longer outputs, since pricing is tied to that value.
- Use top_p and temperature together carefully to balance diversity and control.
Notes
- Both prompt and image are required. prompt is English only.
- Default settings include reasoning_mode = no_think, max_tokens = 1024, temperature = 0.7, and top_p = 0.95.
- Pricing depends on the configured max_tokens, not on other generation settings.
Related Models
- NVIDIA Nemotron-3 Nano Omni Video — Analyze and understand video content with the same Omni model family.
- NVIDIA Nemotron-3 Nano Omni Audio — Process and understand audio inputs for multimodal workflows.
- NVIDIA Nemotron-3 Nano Omni Text — Use the text-only variant for language generation and text-based tasks.
Authentication
For authentication details, please refer to the Authentication Guide.
API Endpoints
Submit Task & Query Result
# Submit the task
curl --location --request POST "https://api.wavespeed.ai/api/v3/nvidia/nemotron-3-nano-omni/vision" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{
    "prompt": "Describe this image in detail.",
    "image": "https://example.com/image.jpg",
    "reasoning_mode": "no_think",
    "max_tokens": 1024,
    "temperature": 0.7,
    "top_p": 0.95,
    "enable_sync_mode": false
}'
# Get the result
curl --location --request GET "https://api.wavespeed.ai/api/v3/predictions/${requestId}/result" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}"
Parameters
Task Submission Parameters
Request Parameters
| Parameter | Type | Required | Default | Range | Description |
|---|---|---|---|---|---|
| prompt | string | Yes | - | - | Text prompt to send to the model. English only. |
| image | string | Yes | - | - | Image URL to analyze with the model. |
| system_prompt | string | No | - | - | Optional system prompt to steer the model. |
| reasoning_mode | string | No | no_think | no_think, think | Whether the model should emit an explicit reasoning trace. |
| max_tokens | integer | No | 1024 | - | Maximum number of tokens to generate. |
| temperature | number | No | 0.7 | - | Sampling temperature. Lower values are more deterministic. |
| top_p | number | No | 0.95 | 0 ~ 1 | Nucleus sampling probability mass. |
| enable_sync_mode | boolean | No | false | - | If true, the request waits until the result is generated and uploaded, then returns it directly in the response. Only available through the API. |
Response Parameters
| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., “success”) |
| data.id | string | Unique identifier for the prediction (task ID) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.has_nsfw_contents | array | Array of boolean values indicating NSFW detection for each output |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”) |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |
Result Request Parameters
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| id | string | Yes | - | Task ID |
Result Response Parameters
| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., “success”) |
| data | object | The prediction data object containing all details |
| data.id | string | Unique identifier for the prediction (the task ID used in the request URL) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”) |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |