Smart Breaking News on AI, Business, Politics & Global Trends | WhistleBuzz
AI

Robotics startup Physical Intelligence says its new robot brain can understand tasks it hasn’t been taught.

By Editor-In-Chief | April 16, 2026 | 6 Mins Read


Physical Intelligence, a two-year-old San Francisco robotics startup that has quietly become one of the Bay Area’s hottest AI companies, announced new research Thursday showing that its latest model can get robots to perform tasks for which they have not been explicitly trained. The company’s researchers say the capability caught even them off guard.

The new model, called π0.7, is what the company describes as an early but meaningful step toward its long-sought goal: a general-purpose robot brain that can be given an unfamiliar task, coached in plain language, and actually complete it. If the finding holds up to scrutiny, it suggests that robotic AI may be approaching an inflection point like the one the field saw with large language models, where capability begins to compound in ways the underlying data alone would not predict.

The paper’s core claim concerns compositional generalization: the ability of a model to combine skills learned in different contexts to solve problems it has never encountered. Until now, the standard approach to robot training has essentially been memorization: collect data for a specific task, train an expert model on that data, and repeat for each new task. According to Physical Intelligence, π0.7 breaks that pattern.

“Once you cross the threshold from just doing exactly what you collected data for to actually remixing things in new ways, capability improves more than proportionally to the amount of data. We’ve seen these favorable scaling properties in other areas, like language and vision,” said Sergey Levine, a co-founder of Physical Intelligence and a professor at the University of California, Berkeley, who specializes in AI for robotics.

The paper’s most striking demonstration involved an air fryer, an appliance the models had essentially never seen in training. When the research team investigated, they found only two relevant episodes in the entire training dataset: in one, a robot simply pushed the fryer shut; in the other, drawn from an open-source dataset, a robot placed a plastic bottle inside the fryer at someone’s direction. From those fragments, plus extensive web-based pre-training data, the model somehow synthesized a functional understanding of how the appliance works.

“It’s very difficult to track where knowledge comes from and where it succeeds or fails,” says Lucy Shi, a Physical Intelligence researcher and Stanford computer science Ph.D. student. Still, the model made a passable attempt at cooking sweet potatoes in the appliance without any guidance, and with step-by-step verbal instructions (essentially a human talking the robot through the task, the way you might walk a new employee through one), it completed the job successfully.

This coaching capability matters because it suggests robots could be dropped into new environments and improved in real time, without additional data collection or model retraining.

So what does that mean? The researchers are careful not to get ahead of themselves, and they are open about the model’s limitations. In at least one case, they put the blame squarely on their own team.

“Sometimes the failure mode is not in the robot or the model,” Shi says. “It’s our fault. We’re not good at prompt engineering.” She describes an early air fryer experiment that yielded a 5% success rate; after she spent about 30 minutes adjusting how the task was phrased for the model, that number jumped to 95%.

Image credit: Physical Intelligence

The model is also not yet capable of autonomously executing complex multi-step tasks from a single high-level command. “You can’t just say, ‘Hey, go make some toast,’” Levine said. “But if you go through the steps of, ‘For the toaster, open this part, press that button, do this,’ it actually tends to work pretty well.”

The team also acknowledged that there are no standardized benchmarks in robotics, which makes external validation of their claims difficult. Instead, the company measured π0.7 against its own previous expert models (dedicated systems trained for individual tasks) and found the generalist’s performance comparable across a range of complex tasks, such as making coffee, folding laundry, and assembling boxes.

What’s most notable about this work, if we take the researchers at their word, is not any single demonstration but how much the results surprised them: people whose job is to know exactly what is in the training data and, therefore, what the model should and shouldn’t be able to do.

“In my experience, if you have a deep understanding of the data, you can make some inferences about what the model can do,” says Ashwin Balakrishna, a research scientist at Physical Intelligence. “I’m rarely surprised, but the last few months have been the first time I’ve been really surprised. I just bought a random set of gears and asked the robot, ‘Hey, can you turn this gear?’ And it worked.”

Levine recalled the moment researchers first encountered GPT-2, which famously generated a story about unicorns in the Andes. “Where did unicorns in Peru come from?” he says. “It’s a very strange combination. I think it’s really special to see that in robotics.”

Naturally, critics will point to an uncomfortable asymmetry here: language models had the entire Internet to learn from. Robots do not, and no amount of clever prompting can fully bridge that gap. Asked about the skepticism, though, Levine pointed somewhere else entirely.

“A constant criticism of robot generalization demos is that the tasks are boring,” he says. “Robots don’t do backflips.” He pushed back on that framing, arguing that the distinction between stunts engineered to go viral and systems that generalize is exactly the point: generalization looks less dramatic than a carefully choreographed backflip, but it is far more useful.

The paper itself hedges carefully throughout, describing π0.7 as showing “early signs” of generalization and “an early demonstration” of new capabilities. These are research results, not a product announcement.

Asked directly when a system based on these findings could be deployed in the real world, Levine declined to speculate. “I think there’s good reason to be optimistic. We’re certainly progressing faster than I expected a few years ago,” he says. “But that question is very difficult for me to answer.”

Physical Intelligence has raised more than $1 billion to date and was recently valued at $5.6 billion. A significant portion of investors’ enthusiasm traces back to co-founder Lachy Groom, who spent years as one of Silicon Valley’s most highly regarded angel investors, backing the likes of Figma, Notion, and Ramp, before deciding that Physical Intelligence was the company he wanted to build. That pedigree has helped the startup attract major institutional funding even though it has declined to give investors a commercialization timeline.

The company is reportedly in talks for a new round that would nearly double its valuation to $11 billion. It declined to comment.


