
NVIDIA Nemotron 3 Nano Omni Video


NVIDIA Nemotron 3 Nano Omni is an open, efficient reasoning model for enterprise agentic workflows, built on a 30B A3B hybrid Transformer-Mamba MoE architecture. Ready-to-use REST inference API with no cold starts and affordable pricing.

Example output:

{
  "output": "A woman with wavy brown hair, wearing a white sweater and blue denim with a belt, is holding a black cassette in her right hand and looking at it. She is in a room with two cream-colored couches on either side, a brown table in the middle, and a lamp on it, along with two photo frames. Behind her, there are three windows with white curtains and brown drapes on the sides."
}

$0.006 per run · ~166 runs / $1


README

NVIDIA Nemotron-3 Nano Omni Video

NVIDIA Nemotron-3 Nano Omni Video is a multimodal video-language model for understanding and analyzing video content. Provide a video URL and an English prompt, and the model generates a text response for tasks such as video description, scene understanding, event summarization, and visual question answering over time-based media.

Why Choose This?

  • Video understanding with natural-language prompts — Ask questions about a video or request summaries, descriptions, and structured analysis in plain English.

  • Temporal scene analysis — Understand actions, events, transitions, and visual context across time instead of from a single frame only.

  • Flexible response control — Adjust max_tokens, temperature, and top_p to balance response length, determinism, and creativity.

  • Optional system steering — Use system_prompt to guide output style, response format, or task behavior for more controlled results.

  • Reasoning mode options — Choose between no_think and think depending on your preferred response mode and workflow.

  • Production-ready API — Suitable for video analysis pipelines, multimodal assistants, content review systems, and automated media understanding workflows.

Parameters

Parameter | Required | Description
prompt | Yes | English text prompt sent to the model.
video_url | Yes | URL of the video to analyze.
system_prompt | No | Optional system prompt used to steer behavior, tone, or response style.
reasoning_mode | No | Reasoning mode: no_think (default) or think.
max_tokens | No | Maximum number of tokens to generate. Default: 1024.
temperature | No | Sampling temperature. Lower values are more deterministic. Default: 0.7.
top_p | No | Nucleus sampling probability mass. Default: 0.95.
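The table above can be sketched as a small payload builder. This is an illustrative helper, not an official client: the field names mirror the parameter table, but the exact request shape expected by the hosting API is an assumption.

```python
# Hypothetical helper: assemble a JSON-serializable request payload using
# the parameters and defaults from the table above. Field names follow the
# documented parameters; the overall payload shape is an assumption.

DEFAULTS = {
    "reasoning_mode": "no_think",
    "max_tokens": 1024,
    "temperature": 0.7,
    "top_p": 0.95,
}

def build_payload(prompt, video_url, system_prompt=None, **options):
    """Build a request payload; prompt and video_url are required."""
    if not prompt or not video_url:
        raise ValueError("prompt and video_url are both required")
    mode = options.get("reasoning_mode", DEFAULTS["reasoning_mode"])
    if mode not in ("no_think", "think"):
        raise ValueError("reasoning_mode must be 'no_think' or 'think'")
    payload = {"prompt": prompt, "video_url": video_url, **DEFAULTS, **options}
    if system_prompt is not None:
        payload["system_prompt"] = system_prompt
    return payload
```

Optional parameters you omit fall back to the documented defaults, so a minimal call only needs the two required fields.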

How to Use

  1. Provide your video URL — upload or link the video you want the model to analyze.
  2. Write your prompt — ask the model to describe, summarize, explain, compare, classify, or answer questions about the video.
  3. Add a system prompt (optional) — guide the output style, response structure, or task framing.
  4. Choose reasoning mode (optional) — use no_think or think depending on your workflow.
  5. Set generation controls (optional) — adjust max_tokens, temperature, and top_p.
  6. Submit — run the model and review the generated response.
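The steps above can be sketched as a single submit call. The endpoint URL and authorization header below are placeholders (substitute your provider's real values); the injectable `post` transport is a testing convenience, not part of any official SDK.

```python
import json
from urllib import request

# Placeholder endpoint -- replace with your provider's real URL.
API_URL = "https://api.example.com/nvidia/nemotron-3-nano-omni-video"

def _http_post(url, payload):
    """POST JSON with stdlib urllib and return the parsed response body."""
    req = request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer YOUR_API_KEY",  # placeholder key
        },
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())

def submit(payload, post=None):
    """Send the payload to the model endpoint and return the parsed result.

    `post` lets you inject a transport (e.g. a stub in tests); by default
    the request goes out over HTTP via _http_post.
    """
    post = post or _http_post
    return post(API_URL, payload)
```

For example, `submit(payload, post=lambda url, p: {"output": "stub"})` exercises the flow without network access.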

Example Prompt

Describe this video in detail, including the setting, key actions, important scene changes, visible subjects, and the overall mood.

Pricing

Billed by configured max_tokens.

Max Tokens | Cost
1000 | $0.006
1024 | $0.0061
2000 | $0.012
4000 | $0.024
8000 | $0.048

Billing Rules

  • Pricing is based on the configured max_tokens value.
  • Cost is $0.006 per 1,000 max tokens.
  • Increasing max_tokens increases cost linearly.
  • prompt, video_url, system_prompt, reasoning_mode, temperature, and top_p do not change pricing directly.
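The billing rules above reduce to one linear formula, sketched here as a small calculator (the $0.006 per 1,000 max tokens rate is taken from the pricing table; the function name is illustrative):

```python
PRICE_PER_1K_MAX_TOKENS = 0.006  # USD, from the pricing table above

def run_cost(max_tokens: int) -> float:
    """Cost of one run in USD, billed linearly by configured max_tokens."""
    if max_tokens <= 0:
        raise ValueError("max_tokens must be positive")
    return max_tokens / 1000 * PRICE_PER_1K_MAX_TOKENS
```

This reproduces the table rows, e.g. the default max_tokens of 1024 bills at roughly $0.0061 per run.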

Best Use Cases

  • Video summarization — Generate concise or detailed summaries of video content.
  • Scene and event understanding — Identify key actions, transitions, and important moments over time.
  • Video question answering — Ask targeted questions about what happens in the video.
  • Content review workflows — Inspect uploaded videos for structured analysis, moderation, or categorization tasks.
  • Multimodal assistants — Add video-aware understanding to internal tools, bots, and applications.
  • Research and annotation tasks — Use guided prompts to consistently analyze and label video content.

Pro Tips

  • Write prompts in English for best compatibility.
  • Be specific about the task, such as summarization, scene breakdown, action recognition, or focused question answering.
  • Use system_prompt when you need a consistent output format, such as bullet summaries, labeled sections, or structured JSON-like responses.
  • Keep temperature lower when you want more stable and deterministic answers.
  • Increase max_tokens only when you need longer outputs, since pricing is tied to that value.
  • Ask focused temporal questions like “what happens before and after” or “summarize the sequence of events” for better video-specific results.

Notes

  • Both prompt and video_url are required.
  • prompt must be written in English.
  • Default settings include reasoning_mode = no_think, max_tokens = 1024, temperature = 0.7, and top_p = 0.95.
  • Pricing depends on configured max_tokens, not on other generation settings.


Availability: This page uses AI models provided by third parties.