At CES 2026, Nvidia announced Alpamayo, a new family of open-source AI models, simulation tools, and datasets for training physical robots and vehicles, designed to help self-driving cars reason through complex driving situations.
“The ChatGPT moment has arrived for physical AI, when machines begin to understand, reason, and act in the real world,” Nvidia CEO Jensen Huang said in a statement. “Alpamayo brings reasoning to self-driving cars, allowing them to think through rare scenarios, drive safely in complex environments, and explain their driving decisions.”
At the core of Nvidia’s new family is Alpamayo 1, a 10-billion-parameter, chain-of-thought reasoning Vision Language Action (VLA) model. It allows autonomous vehicles to think more like humans and solve complex edge cases, such as how to navigate a traffic light stop at a busy intersection, without any prior experience of that exact scenario.
“We do this by breaking down the problem into steps, considering all possibilities, and choosing the safest path forward,” Ali Kani, Nvidia’s vice president of automotive, said at a press conference Monday.
Or, as Huang said in Monday’s keynote, “[Alpamayo] takes input from sensors and not only operates the steering, brakes, and accelerator, but also makes inferences about what actions are going to be taken. It tells us what actions are going to be taken and why those actions occurred. And, of course, it tells us the trajectory.”
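The decision procedure Kani describes, enumerating candidate actions, scoring their risk, and choosing the safest one while keeping a human-readable rationale, can be sketched in miniature. This is purely a conceptual illustration, not Nvidia's implementation: the `Candidate` structure, risk scores, and threshold below are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    action: str     # e.g. "yield", "proceed", "stop"
    risk: float     # estimated risk of the maneuver, 0.0 (safe) to 1.0
    rationale: str  # human-readable reason, echoing the model's explanations

def choose_safest(candidates: list[Candidate], max_risk: float = 0.2) -> Candidate:
    """Pick the lowest-risk candidate; fall back to stopping if none is safe enough."""
    feasible = [c for c in candidates if c.risk <= max_risk]
    if not feasible:
        return Candidate("stop", 0.0, "no option under the risk threshold; hold position")
    return min(feasible, key=lambda c: c.risk)

# Toy scenario: a stop at a busy intersection.
candidates = [
    Candidate("proceed", 0.45, "cross-traffic is still clearing the intersection"),
    Candidate("yield",   0.10, "a pedestrian is finishing the crosswalk"),
    Candidate("stop",    0.05, "hold at the stop line until the light changes"),
]
best = choose_safest(candidates)
print(f"{best.action}: {best.rationale}")
```

The point of the sketch is the shape of the loop, not the numbers: the system surfaces both the chosen action and the reason for it, which is the explainability Huang emphasizes.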
The underlying code for Alpamayo 1 is available on Hugging Face. Developers can fine-tune Alpamayo into smaller, faster versions for vehicle development, use it to train simpler driving systems, and build tools on top of it, such as auto-labeling systems that tag video data and evaluators that check whether a vehicle has made wise decisions.
“Cosmos can also be used to generate synthetic data to train and test Alpamayo-based AV applications on a combination of real and synthetic datasets,” Kani said. Cosmos is Nvidia’s family of generative world models: AI systems that create representations of the physical environment in order to predict outcomes and plan actions.
As part of the Alpamayo rollout, Nvidia is also releasing an open dataset containing more than 1,700 hours of driving data collected across a variety of geographies and conditions, covering rare and complex real-world scenarios. The company is also launching AlpaSim, an open-source simulation framework for validating autonomous driving systems. Available on GitHub, AlpaSim is designed to replicate real-world driving, from sensor behavior to traffic conditions, allowing developers to safely test their systems at scale.
