Nvidia released a new stack of robot foundation models, simulation tools, and edge hardware at CES 2026. It’s a move that signals the company’s ambition to become the default platform for generalist robotics, much like Android became the operating system for smartphones.
Nvidia’s move into robotics reflects a broader industry shift as AI moves from the cloud into machines that can reason and act in the physical world, enabled by cheaper sensors, more advanced simulation, and AI models that generalize across tasks.
Nvidia on Monday revealed details of its full-stack ecosystem for physical AI. It includes new open foundation models, all available on Hugging Face, that allow robots to reason, plan, and adapt across many tasks and diverse environments rather than being limited to narrow, task-specific behaviors.
The models include Cosmos Transfer 2.5 and Cosmos Predict 2.5, a pair of world models for synthetic data generation and robot policy evaluation in simulation; Cosmos Reason 2, a reasoning vision language model (VLM) that lets AI systems see, understand, and act in the physical world; and Isaac GR00T N1.6, a next-generation vision language action (VLA) model built specifically for humanoid robots. GR00T uses Cosmos Reason as its brain, unlocking full-body humanoid control so a robot can move and manipulate objects simultaneously.
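Because the models ship openly on Hugging Face, pulling the weights down for local experimentation is straightforward with the standard huggingface_hub client. A minimal sketch; the repo id below is a hypothetical placeholder, since the exact names of the CES releases may differ from what Nvidia publishes:

```python
# Minimal sketch: fetching one of Nvidia's open robotics models from
# Hugging Face. The repo id is hypothetical -- check Nvidia's Hugging
# Face organization page for the actual Cosmos and GR00T release names.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="nvidia/GR00T-N1.6",  # hypothetical repo id
)
print(f"Model weights downloaded to {local_dir}")
```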
Nvidia also introduced Isaac Lab-Arena at CES, an open-source simulation framework hosted on GitHub that forms another component of the company’s physical AI platform, letting developers safely test robot capabilities in virtual environments.
The platform promises to address key industry challenges. As robots learn increasingly complex tasks, from precision object handling to cable installation, validating those abilities in the physical world can be costly, time-consuming, and risky. Isaac Lab-Arena addresses this by bundling assets, task scenarios, training tools, and established benchmarks such as Libero, RoboCasa, and RoboTwin into a uniform standard the industry has so far lacked.
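Nvidia’s announcement doesn’t spell out the Arena API, but Isaac Lab tasks have historically been exposed through the standard Gymnasium interface, so evaluating a policy against a benchmark typically reduces to the familiar reset/step loop. A rough sketch under that assumption, with a hypothetical task id and a random policy standing in for a trained one:

```python
# Sketch of a benchmark-style evaluation loop, assuming a Gymnasium-
# compatible environment. "Isaac-Lab-Arena-Libero-v0" is a hypothetical
# task id; real Isaac Lab environments also require Isaac Sim to run.
import gymnasium as gym

env = gym.make("Isaac-Lab-Arena-Libero-v0")  # hypothetical id
num_episodes = 50
successes = 0

for _ in range(num_episodes):
    obs, info = env.reset()
    done = False
    while not done:
        action = env.action_space.sample()  # stand-in for a trained policy
        obs, reward, terminated, truncated, info = env.step(action)
        done = terminated or truncated
    successes += int(info.get("success", False))  # hypothetical success flag

env.close()
print(f"Success rate: {successes / num_episodes:.1%}")
```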
Supporting the ecosystem is Nvidia OSMO, an open-source command center that provides the connective infrastructure tying the entire workflow together, from data generation to training, across desktop and cloud environments.
Powering it all is the newest member of the Thor family, the Jetson T4000, a Blackwell-based module that Nvidia is touting as a cost-effective on-device computing upgrade: 1,200 teraflops of AI compute and 64GB of memory in a 40- to 70-watt power envelope.
Nvidia is also deepening its partnership with Hugging Face so that more people can experiment with robot training without expensive hardware or specialized expertise. The partnership integrates Nvidia’s Isaac and GR00T technologies into Hugging Face’s LeRobot framework, connecting Nvidia’s 2 million robot developers with Hugging Face’s 13 million AI builders. The developer platform’s open source Reachy 2 humanoid now works directly with Nvidia’s Jetson Thor chip, letting developers experiment with different AI models without being tied to proprietary systems.
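On the Hugging Face side, LeRobot already lets developers pull community datasets and pretrained policies with a few lines of Python, which is presumably where the new Isaac and GR00T integrations will surface. A sketch using one of LeRobot’s existing public datasets; the module path follows earlier LeRobot releases and may have moved in newer ones:

```python
# Sketch: loading a public robot-learning dataset with LeRobot.
# "lerobot/pusht" is an existing community dataset; the import path
# matches earlier LeRobot releases and may differ in current versions.
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

dataset = LeRobotDataset("lerobot/pusht")
print(f"{dataset.num_episodes} episodes, {len(dataset)} frames")

frame = dataset[0]  # a dict of tensors: camera images, robot state, actions
print(frame.keys())
```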
The big picture is that Nvidia wants to make robot development more accessible while positioning itself as the underlying hardware and software vendor that powers it, much as Android became the default for smartphone manufacturers.
There are early signs that Nvidia’s strategy is working. Robotics is the fastest growing category on Hugging Face, with Nvidia models leading in downloads. Meanwhile, robotics companies are already using Nvidia’s technology, from Boston Dynamics and Caterpillar to Franka Robots and NEURA Robotics.
See TechCrunch’s full coverage of the annual CES conference here.
