We are at a unique moment for AI companies building their own foundation models.
There's a generation of industry veterans who made their names at large technology companies and are now out on their own, along with legendary researchers who have vast experience but vague commercial aspirations. It's clear that at least some of these new labs have the potential to become giants on the scale of OpenAI, but there's also room to do interesting research without worrying too much about commercialization.
The result? It's getting harder to tell who's actually trying to make money.
To make things easier, I'm proposing a kind of sliding scale for companies that build foundation models. It's a five-point scale, and it doesn't track whether you're actually making money, just how hard you're trying to. The idea is to measure ambition, not success.
It breaks down like this:
Level 5: We are already making millions of dollars every day, thank you very much.
Level 4: We have a detailed multi-step plan to become the wealthiest human beings on Earth.
Level 3: We have a bunch of promising product ideas that will pan out over time.
Level 2: We have a vague outline of a plan.
Level 1: True wealth is loving yourself.
Big names like OpenAI, Anthropic, and Google DeepMind are all at Level 5. The new generation of labs now starting up is more interesting: they're raising money on an enormous scale and dreaming big, but their ambitions can be hard to read.
Importantly, the people founding these labs usually get to choose whichever level they want. There is so much money in AI right now that no one is going to ask them for a business plan; investors seem happy to pile in even if the lab is essentially a research project. And unless you have a particular desire to become a billionaire, you may be happier at Level 2 than at Level 5.
The problem is that it's not always clear where a given AI lab sits on the scale, and much of the current drama in the AI industry stems from that ambiguity. Much of the anxiety about OpenAI's transition away from nonprofit status comes from the fact that the organization spent years at Level 1 and then jumped to Level 5 almost overnight. On the other hand, you could argue that Meta's early AI research was firmly at Level 2, when what Meta really wanted was Level 4.
With that in mind, here's a quick rundown of four of the most prominent new AI labs and where they land on the scale.
Humans&
Humans& was the big AI news this week, and it was part of the inspiration for this whole scale. The founders make a compelling pitch for a next-generation AI model focused on communication and coordination tools rather than scaling laws.
But for all the enthusiastic press, Humans& has stayed tight-lipped about how any of this will translate into a monetizable product. They sound like they want to build one; the team just won't commit to anything specific. The closest they've come is saying they'll build some kind of AI workplace tool, one that replaces products like Slack, Jira, and Google Docs while also redefining how those tools work at a fundamental level. Workplace software for the post-software workplace!
It's my job to know what that means, and I'm still pretty confused by that last part. But I think it's specific enough to count as Level 3.
Thinking Machines Lab
This one is very difficult to rate! As a general rule, when OpenAI's former CTO and ChatGPT project lead raises a $2 billion seed round, you should assume she has a pretty concrete roadmap. Mira Murati doesn't strike me as someone who jumps in without a plan, so heading into 2026, I'd have put TML at Level 4.
But the last two weeks have complicated things. The resignation of CTO and co-founder Barret Zoph drew most of the headlines, due in part to the unusual circumstances, but at least five other employees have left alongside Zoph, many citing concerns about the company's direction. Barely a year in, nearly half of the executives on TML's founding team no longer work there. One way to read the events: the founders thought they had a solid plan to become a world-class AI lab, and the plan turned out to be less solid than they thought. Or, in terms of the scale, they wanted a Level 4 lab and found themselves at Level 2 or 3.
There isn't enough evidence to justify a downgrade just yet, but it's getting closer.
World Labs
Fei-Fei Li is one of the most respected names in AI research, best known for establishing the ImageNet challenge that kicked off the modern deep learning boom. She currently holds the Sequoia Endowed Chair at Stanford University, where she co-directs two different AI labs. I won't bore you with her various academic honors and appointments; suffice it to say that if she wanted, she could spend the rest of her life collecting awards and being told how great she is. Her book is also very good!
So when Li announced in 2024 that she had raised $230 million for a spatial AI company called World Labs, you might have guessed it would operate at Level 2 or below.
But that was over a year ago, which is a long time in the world of AI. Since then, World Labs has shipped both full world-generation models and commercial products built on top of them. Over the same period, there have been real signs of demand for world models from both the video game and visual effects industries, and none of the major labs are building anything that competes. The result looks a lot like a Level 4 company, and one that will likely graduate to Level 5 soon.
Safe Superintelligence (SSI)
SSI, founded by former OpenAI chief scientist Ilya Sutskever, looks like a classic example of a Level 1 startup. Sutskever has gone to great lengths to protect SSI from commercial pressures, going so far as to turn down an acquisition attempt from Meta. There are no product cycles, and there doesn't seem to be a product at all, apart from a superintelligent foundation model that is still baking. On that pitch, he has raised $3 billion. Sutskever has always been more interested in the science of AI than the business, and all signs point to this being a genuinely scientific project.
That said, the world of AI moves quickly, and it would be foolish to write SSI out of the commercial picture entirely. In a recent appearance on the Dwarkesh Podcast, Sutskever gave two reasons SSI might change direction: "that might happen if the timeline turns out to be longer," or "because there's a lot of value in having the best and most powerful AI out there impacting the world." In other words, if the research goes very well or very badly, SSI could move up several levels in a hurry.
