Nvidia has established itself as the undisputed leader in artificial intelligence chips, selling massive amounts of silicon to most of the world's largest technology companies on its way to a market capitalization of $4.5 trillion.
One of Nvidia's major customers is Google. The surge in demand for AI computing power in the cloud has put a strain on the supply of chipmakers' graphics processing units (GPUs).
There’s no sign that Google will slow down its purchases of Nvidia GPUs, but the internet giant is increasingly showing that it’s more than just a buyer of high-powered silicon. It’s also a developer.
Google announced Thursday that its most powerful chip ever, called Ironwood, will be widely available in the coming weeks. It is the seventh generation of Google's Tensor Processing Unit (TPU), custom silicon that has been in development for more than a decade.
TPUs are application-specific integrated circuits (ASICs) that play a key role in AI by providing highly specialized and efficient hardware for specific tasks. Google says Ironwood is designed to handle the heaviest AI workloads, from training large models to powering real-time chatbots and AI agents, and is more than four times faster than previous versions. AI startup Anthropic plans to use up to 1 million of them to run Claude models.
For Google, TPUs provide a competitive edge at a time when all the hyperscalers are rushing to build huge data centers and AI processors can’t be manufactured fast enough to meet demand. Other cloud companies are taking similar approaches, but they are far behind.
Amazon Web Services delivered its first cloud AI chip, Inferentia, to customers in 2019, followed by Trainium three years later. Microsoft didn’t announce its first custom AI chip, Maia, until late 2023.
“Google is the only ASIC player that’s actually deployed this product in large numbers,” said Stacey Rasgon, a semiconductor analyst at Bernstein. “For other big companies, it takes a lot of time, a lot of effort, a lot of money. They are the most advanced of the other hyperscalers.”
Google has not commented on the matter.

Google's TPUs were originally built for internal workloads and have been available to cloud customers since 2018. Nvidia has shown some concern lately: The Wall Street Journal reported that when OpenAI signed its first cloud deal with Google earlier this year, the announcement prompted Nvidia CEO Jensen Huang to open further discussions with the AI startup and its CEO, Sam Altman.
Unlike Nvidia, Google doesn't sell its chips as hardware, but rather provides access to TPUs as a service through the cloud, and this has emerged as one of the company's big growth drivers. Google's parent company, Alphabet, announced in its third-quarter earnings report last week that its cloud revenue rose 34% from a year earlier to $15.15 billion, beating analysts' expectations. The company ended the quarter with a backlog of $155 billion.
“We see strong demand for our AI infrastructure products, including TPU-based and GPU-based solutions,” CEO Sundar Pichai said in an earnings call. “This has been one of the key drivers of our growth over the past year, and we believe demand will continue to be very strong going forward, and we are investing to meet it.”
Google has not disclosed the size of its TPU business within the cloud segment. Analysts at DA Davidson estimated in September that the “standalone” business comprising TPU and Google’s DeepMind AI unit could be valued at about $900 billion, up from an estimate of $717 billion in January. Alphabet’s current market capitalization is over $3.4 trillion.
“Tightly targeted” chips
Customization is a big differentiator for Google. One key advantage, analysts say, is the greater efficiency TPU offers to customers compared to competing products and services.
“They’re actually making chips that are very tightly targeted to the workloads that they expect to see,” said Tech Insights analyst James Sanders.
Rasgon said efficiency will become increasingly important because given all the infrastructure being built, “the bottleneck is probably not chip supply, but power.”
Google on Tuesday announced Project Suncatcher, which explores how an interconnected network of solar-powered satellites, equipped with its Tensor Processing Unit (TPU) AI chips, could make the most of the sun's power.
As part of the project, Google announced plans to launch two prototype solar-powered satellites equipped with TPUs by early 2027.
“This approach has great potential for scale-up and also has minimal impact on terrestrial resources,” the company said in a statement. “This will test our hardware in orbit and lay the foundation for a future era of large-scale computation in space.”
Dario Amodei, co-founder and CEO of Anthropic, at the 2025 World Economic Forum.
Google’s largest TPU deal in history closed late last month, when the company announced a major expansion of its contract worth tens of billions of dollars with OpenAI rival Anthropic. With this partnership, Google is expected to bring well over a gigawatt of AI computing power online by 2026.
“Anthropic’s choice to significantly expand its use of TPUs reflects the excellent cost performance and efficiency that our team has seen with TPUs for several years,” Google Cloud CEO Thomas Kurian said in the announcement.
Google has invested $3 billion in Anthropic. And while Amazon remains Anthropic's most deeply embedded cloud partner, Google now provides core infrastructure supporting the next generation of Claude models.
“I think this multi-chip strategy is the only way we’ve been able to offer so many services this year because the demand for our model is so high,” Mike Krieger, Anthropic’s chief product officer, told CNBC.
This strategy spans TPUs, Amazon Trainium, and Nvidia GPUs, allowing the company to optimize for cost, performance, and redundancy. Krieger said Anthropic has done a lot of upfront work to ensure its models will work equally well with any silicon provider.
“Now that we’re able to bring these large data centers online and meet our customers where they are, we’re seeing that investment pay off,” Krieger said.
Big expenses are coming
Two months before the Anthropic deal, Google signed a six-year cloud agreement with Meta. It is not clear how much of that agreement, worth more than $10 billion, includes the use of TPUs. OpenAI also said it will start using Google's cloud as it diversifies away from Microsoft, but the company told Reuters it has not deployed TPUs.
Alphabet CFO Anat Ashkenazi said Google's cloud momentum in the latest quarter was driven by increased enterprise demand for Google's full AI stack. The company said it closed more $1 billion-plus cloud deals in the first nine months of 2025 than in the previous two years combined.
"GCP is seeing strong demand for enterprise AI infrastructure, including TPUs and GPUs," Ashkenazi said, adding that customers are also flocking to the company's latest Gemini products and services "such as cybersecurity and data analytics."

Amazon, which reported its market-leading cloud infrastructure business grew 20% last quarter, expressed similar sentiments.
AWS CEO Matt Garman told CNBC in a recent interview that the company’s Trainium chip series is gaining momentum. “All of the Trainium 2 chips currently coming into data centers are sold and in use,” he said, promising further performance gains and efficiency gains with Trainium 3.
Shareholders have shown a willingness to make huge investments.
Google just raised its capex forecast for this year to $93 billion from its previous forecast of $85 billion, with an even steeper increase expected in 2026. The stock soared 38% in the third quarter, its best performance in 20 years, and rose another 17% in the fourth quarter.
Mizuho recently pointed to Google's clear cost and performance advantages with the TPU, noting that while the chip was originally developed for internal use, Google is now winning larger workloads from external customers.
Morgan Stanley analysts wrote in a June report that while Nvidia will likely remain the dominant chip provider in the AI space, increasing developer familiarity with TPUs could become a meaningful driver of growth for Google Cloud.
And analysts at DA Davidson said in September that demand for TPUs is so high that Google should consider selling the systems to external customers, including frontier AI labs.
“We continue to believe that Google’s TPU remains the best alternative to Nvidia, and the gap between the two has narrowed significantly over the past 9-12 months,” they wrote. “During this time, we have seen an increase in positive sentiment towards TPU.”

