We’re excited to announce day-0 support for NVIDIA Nemotron 3 Nano Omni on Clarifai. Available now on the Clarifai Reasoning Engine, Nano Omni brings fast multimodal reasoning to developers building agentic systems, delivering throughput of 400+ tokens per second.
NVIDIA Nemotron 3 Nano Omni is a 30B A3B multimodal reasoning model built for workloads that span documents, images, video, and audio. With a 256K context window and support for text, image, video, and audio inputs with text output, it gives developers a single model for handling rich multimodal context within agentic workflows.
That makes it a strong fit for sub-agents in workflows where multimodal understanding and speed need to go together.
A Multimodal Model for Specialized Sub-Agents
As agent systems grow more capable, they also become more specialized. Different models and components take on planning, execution, retrieval, and verification, each operating within a broader workflow. In that architecture, the model handling multimodal inputs has to do more than process isolated inputs. It has to interpret multiple modalities together, preserve context across steps, and respond fast enough to stay within the operational loop.
As a lightweight multimodal model for sub-agents, Nemotron 3 Nano Omni can reason across screens, documents, charts, audio, and video without routing each modality through a separate stack. Rather than splitting vision, speech, and language across multiple models, it gives developers a more unified way to handle multimodal reasoning while keeping the overall system easier to manage.
Built for Computer Use, Documents, and Audio-Video Reasoning
Nano Omni is especially relevant for the kinds of workloads that are becoming central to enterprise agentic systems.
For computer use, agents need to read interfaces, track UI state over time, and verify whether actions completed as expected. For document intelligence, they need to reason across text, tables, charts, screenshots, scanned pages, and mixed visual structure in the same pass. For audio and video workflows, they need to connect what was said, what was shown, and what changed over time.
These are all cases where multimodal capability has to work reliably in production, with a model that can handle multiple modalities efficiently without splitting the workflow across separate models.
The model represents a significant jump in capability from earlier models in the Nemotron family. Notable improvements on benchmarks like OCRBenchV2, OCR_Reasoning, MathVista_MINI, and OSWorld reflect the model’s stronger performance on the real-world workloads today’s agents are likely to serve.

That’s where Nano Omni fits naturally, giving developers a single multimodal reasoning stream for the tasks sub-agents are increasingly expected to handle.
Agent-Friendly Tokenomics
In agent systems, sub-agents take on recurring tasks across documents, screens, audio, and video within a larger workflow. Each invocation adds to the cost, throughput, and infrastructure demands of the overall system. NVIDIA Nemotron 3 Nano Omni consolidates vision, speech, and language into a single multimodal model, reducing inference hops, orchestration logic, and cross-model synchronization compared with separate perception stacks.
Nano Omni delivers roughly 2x higher throughput on average, along with about 2.5x lower compute for video reasoning, thanks to temporal-aware perception and efficient video sampling. For multimodal agent workflows, that means higher throughput and lower compute overhead without adding complexity to the stack.
The model uses a hybrid Mixture-of-Experts architecture with a Transformer-Mamba design, along with 3D convolution layers and Efficient Video Sampling for temporal and video inputs. It can run on a single H100, H200, or B200, making it practical to deploy multimodal sub-agents without stretching infrastructure requirements.
High-Throughput Inference on Clarifai
On the Clarifai Reasoning Engine, NVIDIA Nemotron 3 Nano Omni runs at 400+ tokens per second, giving developers the throughput needed for production multimodal agent workflows. That matters in systems where sub-agents are called repeatedly to process documents, interfaces, audio, and video as part of an ongoing workflow.
The Clarifai Reasoning Engine is built for inference acceleration, combining optimized kernels, speculative decoding, and adaptive performance techniques to improve throughput for reasoning models without compromising accuracy.
Getting Started on Clarifai
Developers can try NVIDIA Nemotron 3 Nano Omni in the Clarifai Playground, and can also access it through an OpenAI-compatible API, making it easy to integrate into existing applications, tools, and agentic frameworks.
For larger-scale or more controlled deployments, Clarifai provides a direct path to production with Compute Orchestration. Developers can run Nano Omni on the Clarifai Reasoning Engine or deploy it across their own cloud, VPC, on-prem, or air-gapped environments while managing deployments through a unified control plane.
NVIDIA Nemotron 3 Nano Omni is available on Clarifai today.
If you have any questions about accessing NVIDIA Nemotron 3 Nano Omni on Clarifai, join our Discord.
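Because the API is OpenAI-compatible, a request follows the familiar chat-completions shape. The sketch below is a minimal illustration using only the Python standard library; the base URL and model identifier shown are assumptions for illustration, so check Clarifai's documentation for the exact values.

```python
import json
import os
import urllib.request

# Assumed endpoint and model id -- consult Clarifai's docs for the exact values.
BASE_URL = "https://api.clarifai.com/v2/ext/openai/v1"
MODEL_ID = "nvidia/nemotron-3-nano-omni"  # assumed model identifier

# A multimodal request: an image URL plus a text question, with text output.
payload = {
    "model": MODEL_ID,
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/chart.png"}},
                {"type": "text",
                 "text": "Summarize the trend shown in this chart."},
            ],
        }
    ],
}

def ask_nano_omni(pat: str) -> str:
    """POST the chat-completions request and return the model's text reply."""
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {pat}",  # Clarifai Personal Access Token
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

if os.environ.get("CLARIFAI_PAT"):  # only hit the network with a real token set
    print(ask_nano_omni(os.environ["CLARIFAI_PAT"]))
```

The same payload works unchanged with the official OpenAI Python SDK pointed at the Clarifai base URL, which is the usual route when integrating with existing agentic frameworks.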
