Nvidia Nemotron 3 Nano Omni Text
NVIDIA Nemotron 3 Nano Omni is an open, efficient reasoning model for enterprise agentic workflows, built on a 30B A3B hybrid Transformer-Mamba MoE architecture. It is available through a ready-to-use REST inference API with strong performance, no cold starts, and affordable pricing.
Features
NVIDIA Nemotron-3 Nano Omni Text
NVIDIA Nemotron-3 Nano Omni Text is a lightweight text-generation model for prompt-based language understanding and response generation. Provide an English prompt, and the model can generate answers, summaries, structured outputs, explanations, and other text-based responses with controllable length and sampling behavior.
Why Choose This?
- Fast text generation: generate responses quickly for chat, automation, summarization, and general language tasks.
- Flexible response control: adjust `max_tokens`, `temperature`, and `top_p` to balance response length, determinism, and creativity.
- Optional system steering: use `system_prompt` to guide tone, structure, formatting, or task behavior for more controlled outputs.
- Reasoning mode options: choose between `no_think` and `think` depending on your preferred response mode and workflow.
- Production-ready API: suitable for assistants, content tools, automation pipelines, internal workflows, and structured text generation tasks.
Parameters
| Parameter | Required | Description |
|---|---|---|
| prompt | Yes | English text prompt sent to the model. |
| system_prompt | No | Optional system prompt used to steer behavior, tone, or response style. |
| reasoning_mode | No | Reasoning mode: no_think (default) or think. |
| max_tokens | No | Maximum number of tokens to generate. Default: 1024. |
| temperature | No | Sampling temperature. Lower values are more deterministic. Default: 0.7. |
| top_p | No | Nucleus sampling probability mass. Default: 0.95. |
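As a sketch of how these parameters fit together, the snippet below builds a request body using the documented defaults. The helper name `build_payload` is illustrative, not part of the API; only `prompt` is required, and the field names match the table above:

```python
# Build a request payload for the text endpoint.
# Only "prompt" is required; the other fields default as documented.
def build_payload(prompt, system_prompt=None, reasoning_mode="no_think",
                  max_tokens=1024, temperature=0.7, top_p=0.95):
    payload = {
        "prompt": prompt,
        "reasoning_mode": reasoning_mode,
        "max_tokens": max_tokens,
        "temperature": temperature,
        "top_p": top_p,
    }
    # system_prompt is optional, so omit it entirely when not provided.
    if system_prompt is not None:
        payload["system_prompt"] = system_prompt
    return payload

body = build_payload("Summarize this product brief.", temperature=0.2)
```

Lowering `temperature` as shown is the usual way to make outputs more deterministic for summarization-style tasks.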
How to Use
- Write your prompt: describe the task, question, or output you want the model to generate.
- Add a system prompt (optional): guide the model's role, format, or tone.
- Choose reasoning mode (optional): use `no_think` or `think` depending on your workflow.
- Set generation controls (optional): adjust `max_tokens`, `temperature`, and `top_p`.
- Submit: run the model and review the generated response.
Example Prompt
Summarize the following product requirements into a concise executive brief with key goals, risks, and next steps.
Pricing
Billing is based on the configured `max_tokens` value.
| Max Tokens | Cost |
|---|---|
| 1000 | $0.006 |
| 1024 | $0.0061 |
| 2000 | $0.012 |
| 4000 | $0.024 |
| 8000 | $0.048 |
Billing Rules
- Pricing is based on the configured `max_tokens` value.
- Cost is $0.006 per 1,000 max tokens.
- Increasing `max_tokens` increases cost linearly.
- `prompt`, `system_prompt`, `reasoning_mode`, `temperature`, and `top_p` do not change pricing directly.
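The linear billing rule above can be written out directly. This is a minimal sketch using the $0.006 per 1,000 max tokens rate from the table; the table appears to round displayed costs to four decimal places (e.g. 1024 tokens at 0.006144 shown as $0.0061):

```python
RATE_PER_1000_TOKENS = 0.006  # USD per 1,000 max tokens, per the billing rules

def estimated_cost(max_tokens: int) -> float:
    """Estimated cost in USD for a request configured with `max_tokens`."""
    return max_tokens / 1000 * RATE_PER_1000_TOKENS
```

Because only `max_tokens` affects price, setting it just above your expected output length is the simplest cost control.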
Best Use Cases
- Question answering — Generate direct answers to prompts and tasks.
- Summarization — Condense long text into concise takeaways or structured briefs.
- Content drafting — Produce outlines, rewrites, explanations, and short-form written content.
- Structured generation — Generate bullet points, labeled sections, or formatted outputs with system guidance.
- Internal automation — Support workflow tools, copilots, and prompt-driven backend tasks.
- General language tasks — Handle classification, transformation, extraction, and text reasoning workflows.
Pro Tips
- Write prompts in English for best compatibility.
- Be explicit about the desired output format, such as summary, bullets, JSON-style structure, or step-by-step explanation.
- Use `system_prompt` when you need consistent tone, role behavior, or formatting rules.
- Keep `temperature` lower when you want more stable and deterministic results.
- Increase `max_tokens` only when you need longer outputs, since pricing is tied to that value.
- Use `top_p` and `temperature` together carefully to balance creativity and control.
Notes
- `prompt` is the only required field.
- `prompt` must be written in English.
- Default settings are `reasoning_mode = no_think`, `max_tokens = 1024`, `temperature = 0.7`, and `top_p = 0.95`.
- Pricing depends on the configured `max_tokens`, not on other generation settings.
Related Models
- NVIDIA Nemotron-3 Nano Omni Vision — Analyze images with the same Omni model family.
- NVIDIA Nemotron-3 Nano Omni Video — Analyze video content with temporal understanding.
- NVIDIA Nemotron-3 Nano Omni Audio — Process and understand audio inputs for multimodal workflows.
Authentication
For authentication details, please refer to the Authentication Guide.
API Endpoints
Submit Task & Query Result
# Submit the task
curl --location --request POST "https://api.wavespeed.ai/api/v3/nvidia/nemotron-3-nano-omni/text" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{
    "prompt": "Summarize the following product requirements into a concise executive brief with key goals, risks, and next steps.",
    "reasoning_mode": "no_think",
    "max_tokens": 1024,
    "temperature": 0.7,
    "top_p": 0.95,
    "enable_sync_mode": false
}'
# Get the result
curl --location --request GET "https://api.wavespeed.ai/api/v3/predictions/${requestId}/result" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}"
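The submit-then-poll flow shown in the curl commands can be sketched in Python. This is an illustrative outline, not an official client: `send` is a caller-supplied placeholder for your HTTP layer (e.g. built on `urllib.request`), injected so the flow itself stays transport-agnostic:

```python
import time

SUBMIT_URL = "https://api.wavespeed.ai/api/v3/nvidia/nemotron-3-nano-omni/text"
RESULT_URL = "https://api.wavespeed.ai/api/v3/predictions/{id}/result"

def run_task(payload, send, poll_interval=1.0, max_polls=60):
    """Submit a task, then poll its result URL until it completes or fails.

    `send(method, url, body)` must return the decoded JSON response,
    including the Authorization header handling; it is injected here
    so this sketch carries no HTTP dependency.
    """
    task = send("POST", SUBMIT_URL, payload)["data"]
    result_url = RESULT_URL.format(id=task["id"])
    for _ in range(max_polls):
        result = send("GET", result_url, None)["data"]
        if result["status"] == "completed":
            return result["outputs"]
        if result["status"] == "failed":
            raise RuntimeError(result.get("error") or "task failed")
        time.sleep(poll_interval)
    raise TimeoutError("task did not complete in time")
```

With `enable_sync_mode` set to true, the submit call itself blocks until the result is ready, and the polling loop is unnecessary.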
Parameters
Task Submission Parameters
Request Parameters
| Parameter | Type | Required | Default | Range | Description |
|---|---|---|---|---|---|
| prompt | string | Yes | - | - | Text prompt to send to the model. English only. |
| system_prompt | string | No | - | - | Optional system prompt to steer the model. |
| reasoning_mode | string | No | no_think | no_think, think | Whether the model should emit an explicit reasoning trace. |
| max_tokens | integer | No | 1024 | - | Maximum number of tokens to generate. |
| temperature | number | No | 0.7 | - | Sampling temperature. Lower values are more deterministic. |
| top_p | number | No | 0.95 | 0 ~ 1 | Nucleus sampling probability mass. |
| enable_sync_mode | boolean | No | false | - | If true, the request waits until the result is generated and uploaded, returning it directly in the response. Available only through the API. |
Response Parameters
| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., “success”) |
| data.id | string | Unique identifier for the prediction (task ID) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.has_nsfw_contents | array | Array of boolean values indicating NSFW detection for each output |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”) |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |
Result Request Parameters
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| id | string | Yes | - | Task ID |
Result Response Parameters
| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., “success”) |
| data | object | The prediction data object containing all details |
| data.id | string | Unique identifier for the prediction (the ID used to fetch the result) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”) |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |
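As a sketch of consuming a result payload with the fields documented above, the helper below (an illustrative name, not part of the API) extracts the output URLs from a completed prediction; the sample values are hypothetical:

```python
def extract_outputs(response):
    """Return output URLs from a completed prediction result, or raise."""
    data = response["data"]
    if data["status"] != "completed":
        # created/processing mean "poll again"; failed carries data.error.
        raise ValueError(f"task not completed: {data['status']} {data.get('error', '')}")
    return data["outputs"]

sample = {
    "code": 200,
    "message": "success",
    "data": {
        "id": "task-123",
        "model": "nvidia/nemotron-3-nano-omni",
        "status": "completed",
        "outputs": ["https://example.com/result.txt"],
        "error": "",
        "timings": {"inference": 850},
    },
}
urls = extract_outputs(sample)
```

Checking `data.status` before reading `data.outputs` matters because the array is empty for any status other than completed.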