Smart Breaking News on AI, Business, Politics & Global Trends | WhistleBuzz
AI

What happens when AI starts building itself?

By Editor-In-Chief · May 14, 2026 · 6 Mins Read


Richard Socher has long been a leading figure in AI, best known for founding the chatbot startup You.com and for his early work on ImageNet. He now joins the current generation of research-focused AI startups with Recursive Superintelligence, a San Francisco-based lab that emerged from stealth on Wednesday with $650 million in funding.

Socher is joined in the new venture by prominent AI researchers, including Peter Norvig and Cresta co-founder Tim Shi. Together they aim to build AI models that recursively self-improve: models that can autonomously identify their own weaknesses and redesign themselves to fix them, without human intervention. This is a long-standing goal of modern AI research.

I spoke with him over Zoom after the launch to dig into Recursive’s unique technical approach and why he doesn’t think of this new project as a neolab (an informal term for a new generation of AI startups that prioritize research over building products).

This interview has been edited for length and clarity.

I’ve been hearing a lot about recursion lately. This feels like a very common goal across different labs. What do you think makes your approach unique?

Our unique approach is to leverage open-endedness to reach a level of recursive self-improvement that no one has yet achieved. It’s an elusive goal for many. Many people assume that just doing automated research will make it happen. You can ask AI to improve other things: a machine learning system, a letter you wrote, anything else. But that’s not recursive self-improvement. It’s just improvement.

Our primary focus is building a truly recursive, self-improving superintelligence at scale. This means that the entire process of ideation, implementation, and validation of research ideas is automated.

First we automate AI research ideas, then eventually all kinds of research ideas, and eventually even research in the physical domain. But it becomes especially powerful when the AI is working on itself and developing a new kind of self-awareness about its own shortcomings.
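The ideation, implementation, and validation loop described above can be pictured as a simple hill-climbing sketch. To be clear, this is a toy illustration, not Recursive Superintelligence's actual method; the `evaluate` and `propose_revision` stand-ins are invented for the example:

```python
import random

def evaluate(model):
    """Hypothetical benchmark score for a candidate model (higher is better)."""
    return sum(model["weights"])

def propose_revision(model):
    """Stand-in for ideation + implementation: the system generates a
    modified version of itself."""
    return {"weights": [w + random.uniform(-0.1, 0.1) for w in model["weights"]]}

def self_improve(model, iterations=100):
    """Validation: keep a revision only if it scores better, so each
    accepted change builds on the last one (the 'recursive' part)."""
    best = evaluate(model)
    for _ in range(iterations):
        candidate = propose_revision(model)
        score = evaluate(candidate)
        if score > best:
            model, best = candidate, score
    return model, best

seed_model = {"weights": [0.5, 0.5, 0.5]}
improved, score = self_improve(seed_model)
assert score >= evaluate(seed_model)  # the accepted model never regresses
```

A real system would replace `evaluate` with genuine research validation and `propose_revision` with model-generated changes to its own training or architecture; the point is only that improvement compounds, because each accepted revision becomes the new baseline for the next round.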

You used the word open-ended. Does that have a special technical meaning?

That’s right. In fact, one of our co-founders, Tim Rocktäschel, led the open-endedness and self-improvement team at Google DeepMind, where he worked specifically on the world model Genie 3, which is a great example of open-endedness. You can prompt it with any concept, any world, any agent, and it creates an interactive version of it.

In biological evolution, animals adapt to their environments, and then other animals counter-adapt to those adaptations. It’s a process that can run for billions of years, and interesting things keep happening, right? That’s how we ended up with eyes in our heads.

Another example is rainbow teaming, from another of Tim’s papers. Have you heard of red teaming?

In cybersecurity, that means…

Right, and red teaming is also done in an LLM context. Basically, you try to get the LLM to teach you how to make a bomb, so that you can stop it from ever doing that.

Now, humans can sit there for a long time and come up with interesting examples of what the AI should not say. But what if you test this first AI with a second AI, and that second AI is tasked with trying to make the first AI say every possible bad thing? Then you can go back and forth through millions of iterations.

You can actually co-evolve two AIs. As one side keeps attacking the other, it comes up with many different angles instead of just one, hence the rainbow analogy. And because the first AI gets inoculated against those attacks, it becomes increasingly safe. This was Tim Rocktäschel’s idea, and it is now used in all the major labs.
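The attacker-versus-defender loop described here can be illustrated with a toy sketch. This is not the actual rainbow-teaming algorithm; the attack "styles", the blocklist defense, and both helper functions are invented purely for illustration:

```python
import random

# Toy stand-ins: an "attack" is just a labeled prompt style, and the
# defender's "inoculation" is a growing blocklist. All names are invented.
ATTACK_STYLES = ["direct", "roleplay", "obfuscated", "translation", "payload-split"]

def attacker_propose(known_blocked):
    """Second AI: search for an attack angle the first AI does not yet block."""
    unblocked = [s for s in ATTACK_STYLES if s not in known_blocked]
    return random.choice(unblocked) if unblocked else None

def defender_inoculate(blocked, attack):
    """First AI: once an attack succeeds, fold it into the defenses."""
    blocked.add(attack)

blocked = set()
for _ in range(100):  # millions of iterations in the real setting
    attack = attacker_propose(blocked)
    if attack is None:
        break  # every known angle is now covered
    defender_inoculate(blocked, attack)

assert blocked == set(ATTACK_STYLES)  # the defender ends up covering all angles
```

Because the attacker is pushed toward angles the defender has not yet seen, coverage broadens with every round, which is the many-colors-instead-of-one intuition behind the name.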

How do you know when it’s done? I suppose it’s never done.

Some of these things will never be done. You can always be smarter. You can always improve at things like programming and mathematics. Intelligence has certain limits. I’m actually trying to formalize these, but the numbers are astronomical. We are far from that limit.

My sense is that a neolab should do something the major labs are not doing. So part of the implication here is that the big research labs don’t think they’ll reach RSI (recursive self-improvement) by doing what they’re doing. Is that fair?

I can’t really comment on what they’re doing, but I think we’re taking a different approach. We really embrace the concept of open-endedness, and our team is completely focused on that vision. The team has been studying this and publishing papers in this area for the past 10 years, and it has a track record of making significant advances in this space and shipping real products. As you know, Tim Shi helped make Cresta a unicorn. Josh Tobin was one of the first people at OpenAI and eventually led the Codex and Deep Research teams.

Actually, I sometimes have a little trouble with this neolab category. I feel like we are more than just a research lab. We want to be a truly viable company that provides great products people use and love, and that has a positive impact on humanity.

So when do you plan to ship your first product?

We’ve thought about it a lot. The team has made so much progress that we may actually pull the timeline forward from what we originally envisioned. But yes, the product will come; the wait will be several quarters, not years.

One way of thinking about recursive self-improvement is that when you deploy this kind of system, computing becomes the only resource that matters. The faster your system runs, the faster it will improve. There is no outside human activity that actually makes a difference. So the competition is how much processing power you can put into it. Do you think that’s the world we’re heading towards?

Never underestimate compute. In the future, I think a very important question will be: how much computing power do humans want to devote to which problems? Here’s cancer, here’s a virus. Which one do you want to solve first? How much computing power do you want to give it? Ultimately it comes down to resource allocation. That will be one of the biggest questions in the world.

If you buy through links in our articles, we may earn a small commission. This does not affect editorial independence.


