
Qualcomm on Monday announced a new artificial intelligence accelerator chip, bringing fresh competition to Nvidia, which has dominated the AI semiconductor market so far.
Qualcomm shares rose 15% on the news.
The AI chips mark a shift for Qualcomm, which has traditionally focused on semiconductors for wireless connectivity and mobile devices rather than large-scale data centers.
According to Qualcomm, both the AI200, scheduled for release in 2026, and the AI250, scheduled for release in 2027, can be installed in systems that fill entire liquid-cooled server racks.
The rack-scale approach matches how Nvidia and AMD offer their graphics processing units (GPUs): as full-rack systems that allow as many as 72 chips to act as a single computer. AI labs need that level of computing power to run the most advanced models.
Qualcomm’s data center chips are based on the AI components in its smartphone chips, called Hexagon neural processing units (NPUs).
“We first wanted to prove ourselves in other domains, and once we built our strength there, it was pretty easy for us to go up a notch to the data center level,” Durga Malladi, Qualcomm’s general manager for data center and edge, said on a call with reporters last week.
Qualcomm’s entry into the data center world brings new competition to the fastest-growing market in technology: equipment for new AI-focused server farms.
McKinsey estimates that nearly $6.7 trillion in capital investment will be spent on data centers by 2030, with the majority going to systems based on AI chips.
The industry is dominated by Nvidia, whose GPUs command more than 90% of the market and whose sales have pushed the company’s market capitalization past $4.5 trillion. Nvidia’s chips were used to train OpenAI’s GPT models, the large language models behind ChatGPT.
But companies like OpenAI are exploring alternatives: earlier this month, OpenAI announced plans to buy chips from AMD, the second-largest GPU maker, and potentially take a stake in the company. Others, such as Google, Amazon and Microsoft, are developing their own AI accelerators for their cloud services.
Qualcomm said its chips focus on inference, or running AI models, rather than training, the process by which labs such as OpenAI create new AI capabilities by churning through terabytes of data.
The chipmaker said its rack-scale systems will ultimately be cheaper for customers such as cloud service providers to operate, with a single rack consuming 160 kilowatts, comparable to the high power draw of some Nvidia GPU racks.
Malladi said Qualcomm plans to sell its AI chips and other components separately, especially for customers such as hyperscalers that prefer to design their own racks. He said other AI chip companies, such as Nvidia and AMD, could even become customers for some of Qualcomm’s data center components, such as central processing units (CPUs).
“What we’ve tried to do is allow customers to either take everything or say, ‘Let’s mix and match,’” Malladi said.
The company declined to comment on the prices of its chips, cards and racks, or on how many NPUs can be installed in a single rack. In May, Qualcomm announced a partnership with Saudi Arabia’s Humain to supply AI inference chips to data centers in the region; Humain will be a Qualcomm customer, committing to deploy systems that can use as much as 200 megawatts of power.
Qualcomm said its AI chips have advantages over other accelerators in power consumption, cost of ownership and a new approach to how memory is handled. The company’s AI cards support 768 gigabytes of memory, which it says is more than comparable offerings from Nvidia and AMD.
Qualcomm’s AI server design, the AI200. Source: Qualcomm