Nvidia on Monday announced new infrastructure and AI models to build the backbone technology for physical AI: systems such as robots and autonomous vehicles that can perceive and interact with the real world.
The semiconductor giant announced Alpamayo-R1, an open reasoning vision language action model for autonomous driving research, at the NeurIPS AI conference in San Diego, California. According to the company, it is the first vision language action model focused on autonomous driving. Vision language action models can process text and images together, allowing a vehicle to “see” its surroundings and make decisions based on what it sees.
The new model is built on Nvidia’s Cosmos-Reason, a reasoning model that thinks through decisions before responding. Nvidia first released the Cosmos model family in January 2025, with additional models following in August.
Technologies like Alpamayo-R1 are essential for companies aiming to achieve Level 4 autonomous driving, in which a vehicle drives itself within defined areas and under certain conditions, Nvidia said in a blog post.
Nvidia hopes this type of reasoning model will give self-driving cars the “common sense” that allows them to deal with nuanced driving decisions just as well as humans.
The model is available on GitHub and Hugging Face.
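For developers who want to try it, the sketch below shows one plausible way to pull the model from Hugging Face with the transformers library. The repo id “nvidia/Alpamayo-R1” and the use of the generic Auto classes are assumptions, not confirmed by Nvidia; the official model card may specify a different id and dedicated inference code.

    # Minimal sketch of downloading the model from Hugging Face.
    # The repo id below is hypothetical; check the official model card.
    from transformers import AutoModel, AutoProcessor

    repo_id = "nvidia/Alpamayo-R1"  # assumed repo id
    processor = AutoProcessor.from_pretrained(repo_id, trust_remote_code=True)
    model = AutoModel.from_pretrained(repo_id, trust_remote_code=True)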
In addition to the new vision models, Nvidia has uploaded step-by-step guides, inference resources, and post-training workflows, collectively referred to as the Cosmos Cookbook, to GitHub to help developers use and train Cosmos models for specific use cases. The cookbook covers data curation, synthetic data generation, and model evaluation.
These announcements come as the company moves full speed into physical AI as a new avenue for its advanced AI GPUs.
Nvidia co-founder and CEO Jensen Huang has repeatedly stated that the next wave of AI is physical AI. Bill Dally, Nvidia’s chief scientist, echoed that sentiment in a conversation with TechCrunch over the summer, emphasizing physical AI in robotics.
“I think eventually robots are going to be a big player in the world. Basically we want to create the brains of all robots,” Dally said at the time. “To do that, we need to start developing key technologies.”
