
Nvidia Nemotron 3 Nano Omni Text


NVIDIA Nemotron 3 Nano Omni is an open, efficient reasoning model for enterprise agentic workflows, built on a 30B A3B hybrid Transformer-Mamba MoE architecture. It is served through a ready-to-use REST inference API with strong performance, no cold starts, and affordable pricing.

Features

NVIDIA Nemotron-3 Nano Omni Text

NVIDIA Nemotron-3 Nano Omni Text is a lightweight text-generation model for prompt-based language understanding and response generation. Provide an English prompt, and the model can generate answers, summaries, structured outputs, explanations, and other text-based responses with controllable length and sampling behavior.


Why Choose This?

  • Fast text generation: Generate responses quickly for chat, automation, summarization, and general language tasks.

  • Flexible response control: Adjust max_tokens, temperature, and top_p to balance response length, determinism, and creativity.

  • Optional system steering: Use system_prompt to guide tone, structure, formatting, or task behavior for more controlled outputs.

  • Reasoning mode options: Choose between no_think and think depending on your preferred response mode and workflow.

  • Production-ready API: Suitable for assistants, content tools, automation pipelines, internal workflows, and structured text generation tasks.


Parameters

Parameter | Required | Description
prompt | Yes | English text prompt sent to the model.
system_prompt | No | Optional system prompt used to steer behavior, tone, or response style.
reasoning_mode | No | Reasoning mode: no_think (default) or think.
max_tokens | No | Maximum number of tokens to generate. Default: 1024.
temperature | No | Sampling temperature. Lower values are more deterministic. Default: 0.7.
top_p | No | Nucleus sampling probability mass. Default: 0.95.
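
For reference, a request body that sets every documented field might look like the sketch below. The prompt and system_prompt values are placeholders, and the numeric values are simply the documented defaults.

{
    "prompt": "Summarize the attached meeting notes in five bullet points.",
    "system_prompt": "You are a concise technical writer. Respond in plain English.",
    "reasoning_mode": "no_think",
    "max_tokens": 1024,
    "temperature": 0.7,
    "top_p": 0.95
}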

How to Use

  1. Write your prompt — describe the task, question, or output you want the model to generate.
  2. Add a system prompt (optional) — guide the model’s role, format, or tone.
  3. Choose reasoning mode (optional) — use no_think or think depending on your workflow.
  4. Set generation controls (optional) — adjust max_tokens, temperature, and top_p.
  5. Submit — run the model and review the generated response.

Example Prompt

Summarize the following product requirements into a concise executive brief with key goals, risks, and next steps.
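
Mapped onto the request fields, that example might look like the sketch below; the system_prompt text and the lower temperature are illustrative choices, not required values.

{
    "prompt": "Summarize the following product requirements into a concise executive brief with key goals, risks, and next steps. <paste the requirements text here>",
    "system_prompt": "You are an executive assistant. Structure the brief into Goals, Risks, and Next Steps sections.",
    "reasoning_mode": "no_think",
    "max_tokens": 1024,
    "temperature": 0.3
}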


Pricing

Billed by configured max_tokens.

Max Tokens | Cost
1000 | $0.006
1024 | $0.0061
2000 | $0.012
4000 | $0.024
8000 | $0.048

Billing Rules

  • Pricing is based on the configured max_tokens value.
  • Cost is $0.006 per 1,000 max tokens (see the quick check below).
  • Increasing max_tokens increases cost linearly.
  • prompt, system_prompt, reasoning_mode, temperature, and top_p do not change pricing directly.
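
As a quick check on the linear rule above, the expected charge for any configured max_tokens value can be estimated with a one-line calculation (this sketch assumes bc is available):

# cost in USD = max_tokens / 1000 * 0.006
MAX_TOKENS=4000
echo "scale=4; $MAX_TOKENS / 1000 * 0.006" | bc   # prints .0240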

Best Use Cases

  • Question answering — Generate direct answers to prompts and tasks.
  • Summarization — Condense long text into concise takeaways or structured briefs.
  • Content drafting — Produce outlines, rewrites, explanations, and short-form written content.
  • Structured generation — Generate bullet points, labeled sections, or formatted outputs with system guidance.
  • Internal automation — Support workflow tools, copilots, and prompt-driven backend tasks.
  • General language tasks — Handle classification, transformation, extraction, and text reasoning workflows.

Pro Tips

  • Write prompts in English for best compatibility.
  • Be explicit about the desired output format, such as summary, bullets, JSON-style structure, or step-by-step explanation.
  • Use system_prompt when you need consistent tone, role behavior, or formatting rules (see the sketch after this list).
  • Keep temperature lower when you want more stable and deterministic results.
  • Increase max_tokens only when you need longer outputs, since pricing is tied to that value.
  • Use top_p and temperature carefully together to balance creativity and control.
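
For example, combining an explicit output format, a system_prompt, and a low temperature (as recommended above) might look like this sketch; all field values are illustrative.

{
    "prompt": "Classify the sentiment of this review as positive, negative, or neutral: \"The battery life is great, but the screen scratches easily.\"",
    "system_prompt": "Answer with a single lowercase label followed by a one-sentence justification.",
    "reasoning_mode": "no_think",
    "max_tokens": 128,
    "temperature": 0.2,
    "top_p": 0.9
}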

Notes

  • prompt is the only required field.
  • prompt must be written in English.
  • Default settings include reasoning_mode = no_think, max_tokens = 1024, temperature = 0.7, and top_p = 0.95.
  • Pricing depends on configured max_tokens, not on other generation settings.

Authentication

For authentication details, please refer to the Authentication Guide.
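
The examples below pass the key as a Bearer token and assume it is exported as the WAVESPEED_API_KEY environment variable:

# Set once per shell session (the key value is a placeholder)
export WAVESPEED_API_KEY="your-api-key"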

API Endpoints

Submit Task & Query Result


# Submit the task
curl --location --request POST "https://api.wavespeed.ai/api/v3/nvidia/nemotron-3-nano-omni/text" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{
    "prompt": "Summarize the following product requirements into a concise executive brief with key goals, risks, and next steps.",
    "reasoning_mode": "no_think",
    "max_tokens": 1024,
    "temperature": 0.7,
    "top_p": 0.95,
    "enable_sync_mode": false
}'

# Get the result
curl --location --request GET "https://api.wavespeed.ai/api/v3/predictions/${requestId}/result" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}"

Parameters

Task Submission Parameters

Request Parameters

Parameter | Type | Required | Default | Range | Description
prompt | string | Yes | - | - | Text prompt to send to the model. English only.
system_prompt | string | No | - | - | Optional system prompt to steer the model.
reasoning_mode | string | No | no_think | no_think, think | Whether the model should emit an explicit reasoning trace.
max_tokens | integer | No | 1024 | - | Maximum number of tokens to generate.
temperature | number | No | 0.7 | - | Sampling temperature. Lower values are more deterministic.
top_p | number | No | 0.95 | 0 ~ 1 | Nucleus sampling probability mass.
enable_sync_mode | boolean | No | false | - | If true, the request waits for the result to be generated and uploaded before returning, so the output is available directly in the response. Available through the API only.
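
If you prefer a single blocking call, setting enable_sync_mode to true keeps the connection open until the output is ready. A minimal sketch (the prompt is illustrative; the response still follows the Response Parameters below):

curl --location --request POST "https://api.wavespeed.ai/api/v3/nvidia/nemotron-3-nano-omni/text" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{
    "prompt": "List three risks of deploying unreviewed machine-generated code.",
    "enable_sync_mode": true
}'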

Response Parameters

Parameter | Type | Description
code | integer | HTTP status code (e.g., 200 for success)
message | string | Status message (e.g., "success")
data.id | string | Unique identifier for the prediction (task ID)
data.model | string | Model ID used for the prediction
data.outputs | array | Array of URLs to the generated content (empty when status is not completed)
data.urls | object | Object containing related API endpoints
data.urls.get | string | URL to retrieve the prediction result
data.has_nsfw_contents | array | Array of boolean values indicating NSFW detection for each output
data.status | string | Status of the task: created, processing, completed, or failed
data.created_at | string | ISO timestamp of when the request was created (e.g., "2023-04-01T12:34:56.789Z")
data.error | string | Error message (empty if no error occurred)
data.timings | object | Object containing timing details
data.timings.inference | integer | Inference time in milliseconds
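
An illustrative submission response is sketched below; the id, model string, and timing value are made up, and only the shape follows the table above.

{
  "code": 200,
  "message": "success",
  "data": {
    "id": "abc123def456",
    "model": "nvidia/nemotron-3-nano-omni/text",
    "outputs": [],
    "urls": {
      "get": "https://api.wavespeed.ai/api/v3/predictions/abc123def456/result"
    },
    "has_nsfw_contents": [],
    "status": "created",
    "created_at": "2023-04-01T12:34:56.789Z",
    "error": "",
    "timings": {
      "inference": 0
    }
  }
}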

Result Request Parameters

Parameter | Type | Required | Default | Description
id | string | Yes | - | Task ID

Result Response Parameters

Parameter | Type | Description
code | integer | HTTP status code (e.g., 200 for success)
message | string | Status message (e.g., "success")
data | object | The prediction data object containing all details
data.id | string | Unique identifier for the prediction (the task ID used in the request)
data.model | string | Model ID used for the prediction
data.outputs | array | Array of URLs to the generated content (empty when status is not completed)
data.urls | object | Object containing related API endpoints
data.urls.get | string | URL to retrieve the prediction result
data.status | string | Status of the task: created, processing, completed, or failed
data.created_at | string | ISO timestamp of when the request was created (e.g., "2023-04-01T12:34:56.789Z")
data.error | string | Error message (empty if no error occurred)
data.timings | object | Object containing timing details
data.timings.inference | integer | Inference time in milliseconds
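
For comparison, an illustrative completed result is sketched below; the values are made up, and the output URL is assumed to point at a generated text file.

{
  "code": 200,
  "message": "success",
  "data": {
    "id": "abc123def456",
    "model": "nvidia/nemotron-3-nano-omni/text",
    "outputs": ["https://example.com/outputs/abc123def456.txt"],
    "urls": {
      "get": "https://api.wavespeed.ai/api/v3/predictions/abc123def456/result"
    },
    "status": "completed",
    "created_at": "2023-04-01T12:34:56.789Z",
    "error": "",
    "timings": {
      "inference": 842
    }
  }
}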