NVIDIA Nemotron-3 Nano Omni Audio
Playground
Try it on WavespeedAI! NVIDIA Nemotron-3 Nano Omni is an open, efficient reasoning model for enterprise agentic workflows, built on a 30B A3B hybrid Transformer-Mamba MoE architecture. Ready-to-use REST inference API with strong performance, no cold starts, and affordable pricing.
Features
NVIDIA Nemotron-3 Nano Omni Audio
NVIDIA Nemotron-3 Nano Omni Audio is a multimodal audio-language model for understanding and analyzing audio content. Provide an audio URL and an English prompt, and the model generates a text response for tasks such as audio description, spoken-content understanding, sound event analysis, and structured audio question answering.
Why Choose This?
- Audio understanding with natural-language prompts: Ask questions about an audio clip or request summaries, descriptions, and structured analysis in plain English.
- Broad audio reasoning: Analyze spoken content, sound events, acoustic context, and overall scene characteristics from uploaded audio.
- Flexible response control: Adjust `max_tokens`, `temperature`, and `top_p` to balance response length, determinism, and creativity.
- Optional system steering: Use `system_prompt` to guide output style, structure, or task behavior for more controlled results.
- Reasoning mode options: Choose between `no_think` and `think` depending on your preferred response mode and workflow.
- Production-ready API: Suitable for audio analysis pipelines, multimodal assistants, content review systems, and automated media understanding workflows.
Parameters
| Parameter | Required | Description |
|---|---|---|
| prompt | Yes | English text prompt sent to the model. |
| audio_url | Yes | URL of the audio to analyze. |
| system_prompt | No | Optional system prompt used to steer behavior, tone, or response style. |
| reasoning_mode | No | Reasoning mode: no_think (default) or think. |
| max_tokens | No | Maximum number of tokens to generate. Default: 1024. |
| temperature | No | Sampling temperature. Lower values are more deterministic. Default: 0.7. |
| top_p | No | Nucleus sampling probability mass. Default: 0.95. |
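A minimal sketch tying the parameters above together. The audio URL here is a hypothetical placeholder, not a real asset; substitute any reachable audio file:

```shell
# Build a request body covering the parameters above.
# The audio URL is a hypothetical placeholder, not a real asset.
cat > request.json <<'EOF'
{
  "prompt": "Summarize the spoken content and list notable background sounds.",
  "audio_url": "https://example.com/sample-audio.wav",
  "system_prompt": "Answer in short bullet points.",
  "reasoning_mode": "no_think",
  "max_tokens": 1024,
  "temperature": 0.7,
  "top_p": 0.95
}
EOF
# Sanity-check the body locally before sending it to the API.
python3 -m json.tool request.json > /dev/null && echo "request.json is valid JSON"
```

Validating the JSON locally catches quoting mistakes before they surface as API errors.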
How to Use
- Provide your audio URL — upload or link the audio you want the model to analyze.
- Write your prompt — ask the model to describe, summarize, explain, classify, or answer questions about the audio.
- Add a system prompt (optional) — guide the response style, structure, or task framing.
- Choose reasoning mode (optional) — use `no_think` or `think` depending on your workflow.
- Set generation controls (optional) — adjust `max_tokens`, `temperature`, and `top_p`.
- Submit — run the model and review the generated response.
Example Prompt
Describe this audio in detail, including the type of sounds present, any spoken content, the overall environment, and the likely context of the recording.
Pricing
Billed by the configured `max_tokens` value.
| Max Tokens | Cost |
|---|---|
| 1000 | $0.006 |
| 1024 | $0.0061 |
| 2000 | $0.012 |
| 4000 | $0.024 |
| 8000 | $0.048 |
Billing Rules
- Pricing is based on the configured `max_tokens` value.
- Cost is $0.006 per 1,000 max tokens.
- Increasing `max_tokens` increases cost linearly.
- `prompt`, `audio_url`, `system_prompt`, `reasoning_mode`, `temperature`, and `top_p` do not change pricing directly.
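The linear rule above can be expressed as `cost = max_tokens / 1000 * 0.006`, which reproduces the pricing table:

```shell
# Estimate cost from the billing rule: $0.006 per 1,000 configured max tokens.
estimate_cost() {
  awk -v t="$1" 'BEGIN { printf "%.4f\n", t / 1000 * 0.006 }'
}
estimate_cost 1024   # prints 0.0061
estimate_cost 8000   # prints 0.0480
```

Note that the charge follows the configured `max_tokens`, not the number of tokens actually generated, so setting it higher than needed costs more.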
Best Use Cases
- Audio summarization — Generate concise or detailed summaries of spoken or environmental audio.
- Sound event analysis — Identify notable sounds, acoustic cues, and scene characteristics.
- Audio question answering — Ask targeted questions about what can be heard in an audio clip.
- Speech and dialogue understanding — Analyze spoken content, conversations, or narration in structured ways.
- Content review workflows — Inspect uploaded audio for categorization, moderation, or review tasks.
- Multimodal assistants — Add audio-aware understanding to internal tools, bots, and applications.
Pro Tips
- Write prompts in English for best compatibility.
- Be specific about the task, such as summarization, sound identification, transcript-style understanding, or focused question answering.
- Use `system_prompt` when you need a consistent output format, such as bullet summaries, labeled sections, or structured JSON-like responses.
- Keep `temperature` lower when you want more stable and deterministic answers.
- Increase `max_tokens` only when you need longer outputs, since pricing is tied to that value.
- Ask focused questions like “what sounds are present,” “what is being said,” or “what is the likely recording environment” for clearer results.
Notes
- Both `prompt` and `audio_url` are required.
- `prompt` must be written in English.
- Default settings are `reasoning_mode = no_think`, `max_tokens = 1024`, `temperature = 0.7`, and `top_p = 0.95`.
- Pricing depends on the configured `max_tokens`, not on other generation settings.
Related Models
- NVIDIA Nemotron-3 Nano Omni Vision — Analyze images with the same Omni model family.
- NVIDIA Nemotron-3 Nano Omni Video — Analyze video content with temporal understanding.
- NVIDIA Nemotron-3 Nano Omni Text — Use the text-only variant for language generation and text-based tasks.
Authentication
For authentication details, please refer to the Authentication Guide.
API Endpoints
Submit Task & Query Result
```shell
# Submit the task (prompt and audio_url are required; the URL below is a placeholder)
curl --location --request POST "https://api.wavespeed.ai/api/v3/nvidia/nemotron-3-nano-omni/audio" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{
    "prompt": "Describe this audio in detail.",
    "audio_url": "https://example.com/sample-audio.wav",
    "reasoning_mode": "no_think",
    "max_tokens": 1024,
    "temperature": 0.7,
    "top_p": 0.95,
    "enable_sync_mode": false
}'

# Get the result
curl --location --request GET "https://api.wavespeed.ai/api/v3/predictions/${requestId}/result" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}"
```
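Because tasks are asynchronous by default, clients typically poll the result endpoint until `data.status` reaches `completed` or `failed`. The loop below is a runnable sketch in which canned responses stand in for the real GET call; against the live API, replace them with the curl request above and add a delay between attempts:

```shell
# Polling sketch: canned responses stand in for the real GET call so this runs offline.
status=""
for attempt in 1 2 3; do
  if [ "$attempt" -lt 3 ]; then
    resp='{"code": 200, "data": {"status": "processing"}}'
  else
    resp='{"code": 200, "data": {"status": "completed"}}'
  fi
  # Extract data.status from the JSON response.
  status=$(printf '%s' "$resp" | python3 -c "import json, sys; print(json.load(sys.stdin)['data']['status'])")
  if [ "$status" = "completed" ] || [ "$status" = "failed" ]; then
    break
  fi
  # Against the live API, wait here before the next request, e.g. `sleep 2`.
done
echo "final status: $status"
```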
Parameters
Task Submission Parameters
Request Parameters
| Parameter | Type | Required | Default | Range | Description |
|---|---|---|---|---|---|
| prompt | string | Yes | - | - | Text prompt to send to the model. English only. |
| audio_url | string | Yes | - | - | URL of the audio to reason about. |
| system_prompt | string | No | - | - | Optional system prompt to steer the model. |
| reasoning_mode | string | No | no_think | no_think, think | Whether the model should emit an explicit reasoning trace. |
| max_tokens | integer | No | 1024 | - | Maximum number of tokens to generate. |
| temperature | number | No | 0.7 | - | Sampling temperature. Lower values are more deterministic. |
| top_p | number | No | 0.95 | 0 ~ 1 | Nucleus sampling probability mass. |
| enable_sync_mode | boolean | No | false | - | If true, the request waits until the result is generated and uploaded, then returns it directly in the response. Available through the API only. |
Response Parameters
| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., “success”) |
| data.id | string | Unique identifier for the prediction (task ID) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.has_nsfw_contents | array | Array of boolean values indicating NSFW detection for each output |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”) |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |
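To illustrate the fields above, the snippet below parses a sample completed response (the ID, model string, and output URL are made-up placeholders) and extracts `data.status` and the first entry of `data.outputs`:

```shell
# Sample completed response; the ID, model, and output URL are placeholders.
cat > result.json <<'EOF'
{
  "code": 200,
  "message": "success",
  "data": {
    "id": "task-123",
    "model": "nvidia/nemotron-3-nano-omni/audio",
    "outputs": ["https://example.com/output.txt"],
    "status": "completed",
    "error": "",
    "timings": {"inference": 1200}
  }
}
EOF
# Print the task status and the first output URL.
python3 -c "
import json
d = json.load(open('result.json'))['data']
print(d['status'])
print(d['outputs'][0])
"
```

Remember that `data.outputs` is empty until `data.status` is `completed`, so check the status before reading output URLs.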
Result Request Parameters
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| id | string | Yes | - | Task ID |
Result Response Parameters
| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., “success”) |
| data | object | The prediction data object containing all details |
| data.id | string | Unique identifier for the prediction (task ID) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”) |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |