SAN FRANCISCO — Inside Anthropic’s headquarters, president and co-founder Daniela Amodei keeps returning to a phrase that has become a governing principle for the artificial intelligence startup’s entire strategy: “Do more with less.”
It is a direct challenge to the prevailing mood in Silicon Valley, where the largest AI labs and their backers treat scale as destiny.
For years, companies have been raising record sums, locking up chips and pouring concrete for data centers across America’s heartland, convinced that whoever builds the biggest intelligence factory will win.
OpenAI has become the clearest example of that approach.
The company has made roughly $1.4 trillion in compute and infrastructure commitments as it works with partners to stand up massive data center campuses and secure next-generation chips at a pace the industry has never seen.
Anthropic’s pitch is that there is another way to win the race: disciplined spending, algorithmic efficiency, and smarter deployment can keep a lab on the frontier without trying to outbuild everyone else.
“I think what we’ve always strived for at Anthropic is to be as careful as possible with the resources that we have while continuing to operate in this space that just requires a lot of computing,” Amodei told CNBC. “Anthropic has always had a fraction of our competitors in terms of compute and capital, yet we have almost consistently delivered the most powerful and best-performing models for the better part of the past few years.”

Daniela Amodei and her brother Dario Amodei, Anthropic’s CEO and an alumnus of Baidu and Google, helped build the very worldview they are betting on today.
Dario Amodei is one of the researchers who helped popularize the scaling paradigm that guides modern model competition: the observation that models improve in a predictable way as compute, data, and model size increase.
That pattern is, in effect, the economic basis for the AI arms race.
It underwrites the capital expenditures of hyperscalers, legitimizes soaring chip valuations, and keeps private markets willing to pay enormous prices for companies still spending heavily on their way to profitability.
But while Anthropic benefits from that logic, the company is trying to prove that the next stage of competition isn’t determined solely by who can afford the most up-front training compute.
Its strategy emphasizes selecting high-quality training data, post-training techniques that make inference more efficient, and products designed to lower the cost of running models at scale. That is the part of the AI business where compute costs never stop.
To be clear, Anthropic is hardly operating on a shoestring. The company has roughly $100 billion in compute commitments, and those needs are expected to keep growing if it wants to stay on the frontier.
“Future compute requirements will be huge,” Daniela Amodei said. “So our expectation is that, yes, as we grow we’ll need more compute to continue to stay at the frontier.”
Still, the company argues that the industry’s headline numbers are often not directly comparable, making the “right” amount to spend less clear than it might seem.
“Because of how some of these deals are structured, a lot of the numbers that get thrown around are not quite the same,” she said, describing an environment where players feel pressured to commit early to secure hardware that is years away.
The bigger truth, she added, is that even the insiders who helped develop the scaling thesis have been surprised by how consistently both capabilities and the business have compounded.

“Even as the people who pioneered the belief in scaling laws, we have continued to be surprised,” Daniela Amodei said. “What I hear all the time from my colleagues is that the exponential continues until it doesn’t. And every year we thought, ‘Well, there’s no way things can keep going exponentially,’ and then every year they do.”
That line captures both the optimism and the anxiety of this moment in the AI economy.
If the exponential holds, the companies that locked in power, chips, and sites early will look prescient. If it breaks down, or if adoption lags the pace of capabilities, overcommitted players could be left with years of fixed costs and long-lead-time infrastructure built for demand that never arrives.
Daniela Amodei drew a distinction between the technological curve and the economic one, a nuance often blurred in public discussion.
On the technology side, she said Anthropic does not expect progress to slow based on what it has observed so far. The complicating question is how quickly businesses and consumers can absorb these capabilities into their actual workflows, where procurement, change management, and human friction can make even the best tools slow to land.
“No matter how good the technology is, it takes time for it to be used in business and personal situations,” she said. “The real question for me is how quickly businesses in particular, but also individuals, can take advantage of this technology.”
This enterprise focus is at the heart of why Anthropic has emerged as a leader in the broader generative AI industry.
The company has positioned itself as an enterprise-first model provider, with much of its revenue tied to other companies paying to build Claude into their workflows, products, and internal systems. That kind of usage tends to be stickier than consumer apps, less prone to churn as the novelty wears off.

Anthropic says revenue has grown roughly tenfold year over year for the third consecutive year. And in a market defined by fierce competition, it has built unusual distribution reach: Claude models are available across the major cloud platforms, including through partners who build and sell competing models.
Daniela Amodei sees that presence less as a détente and more as a reflection of customer pull: large enterprises want options across clouds, and cloud providers want to offer what their biggest customers want to buy.
That multi-cloud posture is also a way to compete without making a single, concentrated bet on infrastructure.
Where OpenAI is anchoring itself in large-scale builds around bespoke campuses and dedicated capacity, Anthropic aims to stay flexible, shifting workloads based on cost, availability, and customer demand while focusing internal energy on improving model efficiency and performance per unit of compute.
As 2026 begins, that divergence matters for another reason. Both companies are having to develop the discipline of firms that answer to public markets while still operating in a private-market world where compute needs grow faster than certainty.
Neither Anthropic nor OpenAI has announced an IPO timeline, but both are building out the financial reporting, governance, forecasting, and operating rhythms that can withstand public scrutiny.
At the same time, both companies are still raising new capital and signing ever-larger computing deals to fund the next stage of model development.
This sets a real test of strategy, not rhetoric.
If funding at today’s scale holds up, OpenAI’s approach is likely to remain the industry standard. If investors start demanding greater efficiency, Anthropic’s “do more with less” posture could become an advantage.
In that sense, Anthropic’s contrarian bet isn’t that scaling doesn’t work. It’s that scale alone won’t decide the race: the winners of the next phase may be the labs that keep improving while spending in a way the real economy can sustain.
“The exponential continues until it doesn’t,” Daniela Amodei said. The question for 2026 is what happens to the AI arms race, and to the companies building it, if the curve finally bends.

