OpenAI CEO Sam Altman speaks during a talk session with SoftBank Group CEO Masayoshi Son at an event titled “Transforming Business with AI” held in Tokyo on February 3, 2025.
Tomohiro Osumi | Getty Images
In November, following Nvidia’s latest earnings release, CEO Jensen Huang touted his company’s position in artificial intelligence to investors, saying of the sector’s hottest startup, “Everything OpenAI does today runs on Nvidia.”
While it’s true that Nvidia maintains its dominant position in AI chips and is currently the world’s most valuable company, competition is emerging and OpenAI is committed to diversification as it pursues historically aggressive expansion plans.
On Wednesday, OpenAI announced a $10 billion deal with Cerebras, a relatively small chipmaker that is eyeing the public market. It was the latest in a series of deals between OpenAI and companies that make the processors needed to build large language models and run increasingly sophisticated workloads.
Last year, OpenAI announced deals with Nvidia, Advanced Micro Devices and Broadcom on its way to achieving a $500 billion private market valuation.
OpenAI is racing to meet the expected demand for its AI technology, telling the market it needs as much processing power as it can get. Here are some of the major chip deals OpenAI has signed as of January, as well as potential partners to keep an eye on.
Nvidia
Nvidia founder and CEO Jensen Huang speaks at Nvidia Live at CES 2026, ahead of the annual Consumer Electronics Show in Las Vegas on January 5, 2026.
Patrick T. Fallon | AFP | Getty Images
OpenAI has relied on Nvidia’s graphics processing units since its early days building large language models, long before the launch of ChatGPT and the beginning of the generative AI boom.
In 2025, that relationship reached a new level. Nvidia announced in September that it would commit $100 billion to support OpenAI in building and deploying at least 10 gigawatts of Nvidia systems.
A gigawatt is a unit of electricity, and 10 gigawatts is roughly equivalent to the annual electricity consumption of 8 million U.S. homes, according to a CNBC analysis of Energy Information Administration data. Huang said in September that 10 gigawatts is equivalent to 4 million to 5 million GPUs.
“This is a huge project,” Huang told CNBC at the time.
OpenAI and Nvidia said the first phase of the project is scheduled to go live on Nvidia’s Vera Rubin platform in the second half of this year. However, Nvidia said during its quarterly earnings call in November that there is “no guarantee” that the deal with OpenAI will move beyond the announcement to the formal contract stage.
As previously reported by CNBC, Nvidia’s initial investment of $10 billion will be committed when the first gigawatt is completed, and the investment will be made at OpenAI’s then-current valuation.
AMD
Lisa Su, chairman and CEO of Advanced Micro Devices Inc., displays the AMD Instinct MI455X GPU at the 2026 CES event in Las Vegas, January 5, 2026.
Bloomberg | Bloomberg | Getty Images
In October, OpenAI announced plans to deploy 6 gigawatts of AMD GPUs in a multiyear, multigenerational hardware agreement.
As part of the transaction, AMD is issuing warrants to OpenAI for up to 160 million shares of AMD common stock, which could represent approximately 10% of the company’s stock. The warrants include vesting milestones tied to both the amount of hardware deployed and AMD’s stock price.
The companies said they plan to deploy the first gigawatt of chips in the second half of 2026, adding that the deal is worth billions of dollars, without disclosing a specific amount.
AMD CEO Lisa Su told CNBC at the time of the announcement, “We need partnerships like this that really unify the ecosystem to really make the best technology available.”
Altman planted the seeds for the deal in June, when he appeared on stage with Su at an AMD launch event in San Jose, California, and said OpenAI plans to use AMD’s latest chips.
Broadcom
Broadcom CEO Hock Tan.
Lucas Jackson | Reuters
Later that month, OpenAI and Broadcom publicly announced a collaboration that had been in the works for more than a year.
Broadcom calls its custom AI chips XPUs and has historically relied on a small number of customers. But its pipeline of potential deals has generated plenty of excitement on Wall Street, with Broadcom now valued at more than $1.6 trillion.
OpenAI said it is designing its own AI chips and systems, which Broadcom will develop and sell. The companies have agreed to deploy 10 gigawatts of these custom AI accelerators.
In an October release, the companies said Broadcom aims to begin deploying racks of AI accelerators and network systems by the second half of this year, with the goal of completing the project by the end of 2029.
However, Broadcom CEO Hock Tan told investors during the company’s quarterly earnings call in December that the company doesn’t expect to see much revenue from the OpenAI partnership in 2026.
“We appreciate the fact that this is a multi-year journey that will continue until 2029,” Tan said. “I call this an agreement, an alignment of where we want to go.”
OpenAI and Broadcom did not disclose financial terms of the deal.
Cerebras
Andrew Feldman, co-founder and CEO of Cerebras Systems, speaks at the Collision conference in Toronto on June 20, 2024.
Ramsey Cardy | Sportsfile | Collision | Getty Images
OpenAI on Wednesday announced a deal to deploy 750 megawatts of Cerebras AI chips, which will come online in multiple tranches through 2028.
According to the release, Cerebras builds large wafer-scale chips that it says can respond up to 15 times faster than GPU-based systems. The company is much smaller than Nvidia, AMD and Broadcom.
OpenAI’s deal with Cerebras is worth more than $10 billion and could be a boon for the chipmaker as it weighs a possible public market debut.
“We are excited to partner with OpenAI and bring world-leading AI models to the world’s fastest AI processors,” Cerebras CEO Andrew Feldman said in a statement.
Cerebras is in dire need of strong customers. The company withdrew its IPO plans in October, days after announcing it had raised more than $1 billion in a funding round. It had filed to go public a year earlier, but its prospectus revealed that it relies heavily on a single customer in the United Arab Emirates: Microsoft-backed G42, which is also an investor in Cerebras.
Potential partners
OpenAI CEO Sam Altman speaks during a media tour of the Stargate Data Center in Abilene, Texas, on September 23, 2025. Stargate is a joint venture between OpenAI, Oracle and SoftBank that is building data centers and other artificial intelligence infrastructure across the United States with promotional support from President Donald Trump.
Kyle Grillot | Bloomberg | Getty Images
What about Amazon, Google and Intel, which have their own AI chips?
In November, OpenAI signed a $38 billion cloud deal with Amazon Web Services. OpenAI will run workloads through existing AWS data centers, but the cloud provider also plans to build additional infrastructure for the startup as part of the deal.
As previously reported by CNBC, Amazon is also in talks to potentially invest more than $10 billion in OpenAI.
OpenAI may decide to use Amazon’s AI chips as part of these investment talks, but nothing has been officially decided, said a person familiar with the matter, who requested anonymity because the discussions are confidential.
AWS introduced its Inferentia chips in 2018 and the latest generation of its Trainium chips in late 2025.
Google Cloud also supplies computing power to OpenAI, thanks to a deal quietly signed last year. But OpenAI said in June that it had no plans to use Google’s in-house chip, known as a tensor processing unit, which Broadcom also helps produce.
Intel is the furthest behind among traditional chipmakers in AI, which helps explain why the company recently received huge investments from the U.S. government and Nvidia. Reuters reported in 2024, citing people familiar with the discussions, that Intel had a chance years earlier to invest in OpenAI and potentially manufacture hardware for the then-fledgling startup, giving Intel a way to reduce dependence on Nvidia.
According to Reuters, Intel decided against the investment.
In October, Intel announced a new data center GPU codenamed Crescent Island that it says is “designed to meet the growing demands of AI inference workloads, offering high memory capacity and energy-efficient performance.” The company said “customer sampling” is scheduled for the second half of 2026.
Wall Street will hear updates on Intel’s latest AI efforts next week, when the company kicks off tech earnings season.
—CNBC’s Kiff Leswing, Mackenzie Cigalos and Jordan Nove contributed to this report.
WATCH: A deep dive into AI chips, from Nvidia GPUs to ASICs from Google and Amazon