
Nvidia Nemotron 3 Nano Omni Audio


NVIDIA Nemotron 3 Nano Omni is an open, efficient reasoning model for enterprise agentic workflows, built on a 30B A3B hybrid Transformer-Mamba MoE architecture. It is available through a ready-to-use REST inference API with strong performance, no cold starts, and affordable pricing.

Features

NVIDIA Nemotron-3 Nano Omni Audio

NVIDIA Nemotron-3 Nano Omni Audio is a multimodal audio-language model for understanding and analyzing audio content. Provide an audio URL and an English prompt, and the model generates a text response for tasks such as audio description, spoken-content understanding, sound event analysis, and structured audio question answering.


Why Choose This?

  • Audio understanding with natural-language prompts — Ask questions about an audio clip or request summaries, descriptions, and structured analysis in plain English.

  • Broad audio reasoning — Analyze spoken content, sound events, acoustic context, and overall scene characteristics from uploaded audio.

  • Flexible response control — Adjust max_tokens, temperature, and top_p to balance response length, determinism, and creativity.

  • Optional system steering — Use system_prompt to guide output style, structure, or task behavior for more controlled results.

  • Reasoning mode options — Choose no_think for direct answers or think for an explicit reasoning trace, depending on your workflow.

  • Production-ready API — Suitable for audio analysis pipelines, multimodal assistants, content review systems, and automated media understanding workflows.


Parameters

| Parameter | Required | Description |
| --- | --- | --- |
| prompt | Yes | English text prompt sent to the model. |
| audio_url | Yes | URL of the audio to analyze. |
| system_prompt | No | Optional system prompt used to steer behavior, tone, or response style. |
| reasoning_mode | No | Reasoning mode: no_think (default) or think. |
| max_tokens | No | Maximum number of tokens to generate. Default: 1024. |
| temperature | No | Sampling temperature. Lower values are more deterministic. Default: 0.7. |
| top_p | No | Nucleus sampling probability mass. Default: 0.95. |
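For reference, a request body built from this table might look like the following minimal Python sketch; the prompt and audio URL are placeholder values, not real inputs:

```python
# Hypothetical request body for the audio endpoint. Only prompt and
# audio_url are required; the remaining fields show the documented defaults.
payload = {
    "prompt": "Describe this audio in detail.",    # required, English only
    "audio_url": "https://example.com/clip.wav",   # required, placeholder URL
    "system_prompt": "Answer as a bullet list.",   # optional steering
    "reasoning_mode": "no_think",                  # default; or "think"
    "max_tokens": 1024,                            # default
    "temperature": 0.7,                            # default
    "top_p": 0.95,                                 # default
}
```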

How to Use

  1. Provide your audio URL — upload or link the audio you want the model to analyze.
  2. Write your prompt — ask the model to describe, summarize, explain, classify, or answer questions about the audio.
  3. Add a system prompt (optional) — guide the response style, structure, or task framing.
  4. Choose reasoning mode (optional) — use no_think or think depending on your workflow.
  5. Set generation controls (optional) — adjust max_tokens, temperature, and top_p.
  6. Submit — run the model and review the generated response.

Example Prompt

Describe this audio in detail, including the type of sounds present, any spoken content, the overall environment, and the likely context of the recording.


Pricing

Billed by configured max_tokens.

| Max Tokens | Cost |
| --- | --- |
| 1000 | $0.006 |
| 1024 | $0.0061 |
| 2000 | $0.012 |
| 4000 | $0.024 |
| 8000 | $0.048 |

Billing Rules

  • Pricing is based on the configured max_tokens value.
  • Cost is $0.006 per 1,000 max tokens.
  • Increasing max_tokens increases cost linearly.
  • prompt, audio_url, system_prompt, reasoning_mode, temperature, and top_p do not change pricing directly.
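Since cost scales linearly at $0.006 per 1,000 max tokens, the pricing table reduces to a one-line calculation; a minimal sketch:

```python
def estimated_cost(max_tokens: int) -> float:
    """Estimated cost in USD for a given max_tokens setting
    ($0.006 per 1,000 max tokens, rounded to 4 decimal places)."""
    return round(max_tokens * 0.006 / 1000, 4)
```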

Best Use Cases

  • Audio summarization — Generate concise or detailed summaries of spoken or environmental audio.
  • Sound event analysis — Identify notable sounds, acoustic cues, and scene characteristics.
  • Audio question answering — Ask targeted questions about what can be heard in an audio clip.
  • Speech and dialogue understanding — Analyze spoken content, conversations, or narration in structured ways.
  • Content review workflows — Inspect uploaded audio for categorization, moderation, or review tasks.
  • Multimodal assistants — Add audio-aware understanding to internal tools, bots, and applications.

Pro Tips

  • Write prompts in English for best compatibility.
  • Be specific about the task, such as summarization, sound identification, transcript-style understanding, or focused question answering.
  • Use system_prompt when you need a consistent output format, such as bullet summaries, labeled sections, or structured JSON-like responses.
  • Keep temperature lower when you want more stable and deterministic answers.
  • Increase max_tokens only when you need longer outputs, since pricing is tied to that value.
  • Ask focused questions like “what sounds are present,” “what is being said,” or “what is the likely recording environment” for clearer results.

Notes

  • Both prompt and audio_url are required.
  • prompt must be written in English.
  • Default settings include reasoning_mode = no_think, max_tokens = 1024, temperature = 0.7, and top_p = 0.95.
  • Pricing depends on configured max_tokens, not on other generation settings.

Authentication

For authentication details, please refer to the Authentication Guide.

API Endpoints

Submit Task & Query Result


# Submit the task (prompt and audio_url are required; the values below are placeholders)
curl --location --request POST "https://api.wavespeed.ai/api/v3/nvidia/nemotron-3-nano-omni/audio" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{
    "prompt": "Describe this audio in detail.",
    "audio_url": "https://example.com/audio.wav",
    "reasoning_mode": "no_think",
    "max_tokens": 1024,
    "temperature": 0.7,
    "top_p": 0.95,
    "enable_sync_mode": false
}'

# Get the result
curl --location --request GET "https://api.wavespeed.ai/api/v3/predictions/${requestId}/result" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}"
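The same submit-and-poll flow can be sketched in Python using only the standard library. The endpoint paths mirror the curl calls above; the WAVESPEED_API_KEY environment variable is assumed to be set, and the prompt and audio URL passed to submit() are up to the caller:

```python
import json
import os
import time
import urllib.request

API = "https://api.wavespeed.ai/api/v3"
HEADERS = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {os.environ.get('WAVESPEED_API_KEY', '')}",
}

def submit(prompt: str, audio_url: str, **options) -> str:
    """Submit an audio-understanding task and return its task ID."""
    body = json.dumps({"prompt": prompt, "audio_url": audio_url, **options}).encode()
    req = urllib.request.Request(
        f"{API}/nvidia/nemotron-3-nano-omni/audio",
        data=body, headers=HEADERS, method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["data"]["id"]

def wait_for_result(task_id: str, interval: float = 1.0) -> dict:
    """Poll the result endpoint until the task completes or fails."""
    url = f"{API}/predictions/{task_id}/result"
    while True:
        req = urllib.request.Request(url, headers=HEADERS)
        with urllib.request.urlopen(req) as resp:
            data = json.load(resp)["data"]
        if data["status"] in ("completed", "failed"):
            return data
        time.sleep(interval)
```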

Parameters

Task Submission Parameters

Request Parameters

| Parameter | Type | Required | Default | Range | Description |
| --- | --- | --- | --- | --- | --- |
| prompt | string | Yes | - | - | Text prompt to send to the model. English only. |
| audio_url | string | Yes | - | - | URL of the audio to reason about. |
| system_prompt | string | No | - | - | Optional system prompt to steer the model. |
| reasoning_mode | string | No | no_think | no_think, think | Whether the model should emit an explicit reasoning trace. |
| max_tokens | integer | No | 1024 | - | Maximum number of tokens to generate. |
| temperature | number | No | 0.7 | - | Sampling temperature. Lower values are more deterministic. |
| top_p | number | No | 0.95 | 0 ~ 1 | Nucleus sampling probability mass. |
| enable_sync_mode | boolean | No | false | - | If true, the request waits until the result is generated and uploaded, returning it directly in the response. Only available through the API. |

Response Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., "success") |
| data.id | string | Unique identifier for the prediction (task ID) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.has_nsfw_contents | array | Array of boolean values indicating NSFW detection for each output |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., "2023-04-01T12:34:56.789Z") |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |
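For illustration, a response shaped like this table can be checked for completion with a small helper; all field values below are made up, not real API output:

```python
# Illustrative response matching the documented fields; values are invented.
response = {
    "code": 200,
    "message": "success",
    "data": {
        "id": "task-123",
        "model": "nvidia/nemotron-3-nano-omni",
        "outputs": [],
        "urls": {"get": "https://api.wavespeed.ai/api/v3/predictions/task-123/result"},
        "has_nsfw_contents": [],
        "status": "created",
        "created_at": "2023-04-01T12:34:56.789Z",
        "error": "",
        "timings": {"inference": 0},
    },
}

def is_done(resp: dict) -> bool:
    """True once the task has reached a terminal state (completed or failed)."""
    return resp["data"]["status"] in ("completed", "failed")
```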

Result Request Parameters

| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| id | string | Yes | - | Task ID |

Result Response Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., "success") |
| data | object | The prediction data object containing all details |
| data.id | string | Unique identifier for the prediction (the task ID queried) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., "2023-04-01T12:34:56.789Z") |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |
© 2025 WaveSpeedAI. All rights reserved.