Compare top AI chips

By Editor-In-Chief, November 21, 2025

A closer look at AI chips, from Nvidia GPUs to ASICs from Google and Amazon

Nvidia exceeded all expectations on Wednesday, reporting a sharp jump in profits thanks to its graphics processing units, which excel at AI workloads. But other categories of AI chips are gaining ground.

Custom ASICs, or application-specific integrated circuits, are now being designed by all the major hyperscalers: Google has its TPUs, Amazon has Trainium, and OpenAI plans to build its own with Broadcom. These chips are smaller, cheaper, and easier to obtain, and they could reduce these companies’ dependence on Nvidia GPUs. Futurum Group’s Daniel Newman told CNBC that he expects custom ASICs to “grow even faster than the GPU market over the next few years.”

In addition to GPUs and ASICs, there are field-programmable gate arrays, which can be reconfigured in software after they are manufactured and are used in all kinds of applications, including signal processing, networking, and AI. There is also a whole class of AI chips that power AI on devices rather than in the cloud; Qualcomm, Apple, and others are betting on that on-device approach.

CNBC spoke to experts and insiders at Big Tech companies to analyze the crowded space and the different types of AI chips that reside there.

GPUs for general computing

GPUs, once primarily used for gaming, have made Nvidia the world’s most valuable publicly traded company after their use shifted to AI workloads. Nvidia has shipped approximately 6 million current-generation Blackwell GPUs over the past year.

Nvidia’s senior director of AI infrastructure, Dion Harris, shows CNBC’s Katie Tarasov how 72 Blackwell GPUs work together as one in the GB200 NVL72 rack-scale AI server system, at Nvidia headquarters in Santa Clara, California, on November 12, 2025. (Photo: Mark Ganley)

The transition from gaming to AI began around 2012, when researchers built AlexNet using Nvidia GPUs; many consider this the big-bang moment of modern AI. AlexNet was a neural network entered into a prominent image-recognition competition. While other contestants ran their entries on central processing units, AlexNet relied on GPUs, delivering remarkable accuracy and blowing away the competition.

AlexNet’s creators discovered that the same parallelism that lets GPUs render lifelike graphics is also ideal for training neural networks, in which computers learn from data rather than relying on a programmer’s code. AlexNet revealed the true potential of GPUs.

Today, GPUs are sold in combination with CPUs in server rack systems, installed in data centers, and run AI workloads in the cloud. CPUs have a small number of powerful cores that handle sequential, general-purpose tasks, whereas GPUs have thousands of smaller, simpler cores that specialize in parallel operations such as matrix multiplication.
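
To make that contrast concrete, here is a minimal, purely illustrative Python sketch (not vendor code): the triple loop mirrors how a single CPU core walks through a matrix multiplication one multiply-add at a time, while the single vectorized call hands the same arithmetic to an optimized backend that fans it out across many execution units at once, which is the style of work a GPU’s thousands of cores are built for.

    import time
    import numpy as np

    n = 128
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)

    # Sequential, CPU-style: one multiply-add at a time, three nested loops.
    def matmul_sequential(a, b):
        out = np.zeros((n, n))
        for i in range(n):
            for j in range(n):
                for k in range(n):
                    out[i, j] += a[i, k] * b[k, j]
        return out

    # Parallel-friendly: one call, executed by an optimized backend.
    start = time.perf_counter()
    c_fast = a @ b
    print(f"vectorized: {time.perf_counter() - start:.4f} s")

    start = time.perf_counter()
    c_slow = matmul_sequential(a, b)
    print(f"sequential: {time.perf_counter() - start:.4f} s")

    assert np.allclose(c_fast, c_slow)  # same result, wildly different speed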

GPUs can perform many operations simultaneously, making them ideal for training and inference, the two main phases of AI computation. Training teaches AI models to learn from patterns in large amounts of data, while inference uses AI to make decisions based on new information.
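
The two phases look different even at toy scale. As a hedged sketch (a stand-in, nothing like how production models are actually built), the Python below fits a one-parameter model: training loops over the data many times to adjust the weight, which is where most of the compute goes, while inference is a single cheap forward pass on a new input.

    import numpy as np

    # Toy data: y = 3x plus a little noise, so training should recover w ≈ 3.
    rng = np.random.default_rng(0)
    x = rng.random(1000)
    y = 3.0 * x + rng.normal(0.0, 0.01, size=1000)

    # Training: repeated passes over the data, updating the weight by gradient descent.
    w, lr = 0.0, 0.5
    for _ in range(200):
        grad = 2.0 * np.mean((w * x - y) * x)  # gradient of mean squared error
        w -= lr * grad

    # Inference: one forward pass on a new input, no weight updates.
    x_new = 0.25
    print(f"learned w = {w:.3f}, prediction for x = {x_new}: {w * x_new:.3f}")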

GPUs are the general-purpose workhorses of Nvidia and its biggest competitor, Advanced Micro Devices. The main differentiator between the two GPU leaders is software: Nvidia’s GPUs are tightly optimized around the company’s proprietary CUDA platform, while AMD’s GPUs lean on an open-source software ecosystem.
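
In practice, frameworks hide much of that ecosystem split. As a hedged illustration: PyTorch’s Nvidia builds run on CUDA, while its AMD builds run on the open ROCm stack but deliberately reuse the same “cuda” device name, so a script like the one below can run on either vendor’s GPUs unchanged, assuming the matching PyTorch build is installed.

    import torch

    # On Nvidia hardware this device is backed by CUDA; on AMD hardware the
    # ROCm build of PyTorch exposes the same "cuda" name over its HIP runtime.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    a = torch.rand(1024, 1024, device=device)
    b = torch.rand(1024, 1024, device=device)
    c = a @ b  # the same matrix multiply, dispatched to whichever backend exists
    print(device, c.shape)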

AMD and Nvidia sell their GPUs to cloud providers such as Amazon, Microsoft, Google, Oracle, and CoreWeave, which rent them out by the hour or minute to AI companies. Anthropic’s $30 billion deal with Nvidia and Microsoft, for example, includes 1 gigawatt of Nvidia GPU computing capacity. AMD has also recently secured major commitments from OpenAI and Oracle.

Nvidia also sells directly to AI companies, including a recent deal to supply at least 4 million GPUs to OpenAI, and to foreign governments such as South Korea, Saudi Arabia, and the United Kingdom.

The chipmaker told CNBC it charges about $3 million for a server rack with 72 Blackwell GPUs working as one, and ships about 1,000 such racks each week.

Dion Harris, Nvidia’s senior director of AI infrastructure, told CNBC that he couldn’t have imagined this level of demand when he joined Nvidia more than eight years ago.

“When we talked to people about building a system with eight GPUs, they thought that was overkill,” he said.

ASICs for custom cloud AI

Training on GPUs was key in the early days of the large language model boom, but as models mature, inference is becoming more important. Inference can run on less powerful chips programmed for more specific tasks. That’s where ASICs come in.

If GPUs are Swiss Army knives that can perform many kinds of parallel computation across different AI workloads, ASICs are single-purpose tools: extremely fast and efficient, but hardwired to do exact calculations for one type of job.

Google released Ironwood, its seventh-generation TPU, in November 2025, ten years after creating its first custom ASIC for AI in 2015. (Image: Google)

“Once a chip is etched into silicon, it cannot be changed, so there is a trade-off in flexibility,” said Chris Miller, author of “Chip War.”

Nvidia’s GPUs offer flexibility that almost any AI company can adopt, but they can cost up to $40,000 apiece and can be difficult to obtain. Startups still rely on them, because designing a custom ASIC carries even higher upfront costs, starting in the tens of millions of dollars, Miller said.

Analysts say custom ASICs will be profitable in the long run for the largest cloud providers who can afford them.

“They want a little more control over the workloads they build,” Newman said. “At the same time, they’re going to continue to work closely with Nvidia and AMD, because they also need capacity. The demand is just insatiable.”

Google was the first Big Tech company to create a custom ASIC for AI acceleration, coining the term tensor processing unit when its first chip appeared in 2015. Google says it had considered building TPUs as early as 2006, but the need became “urgent” in 2013 when it realized AI workloads would double the number of data centers it required. TPUs also helped Google invent the Transformer in 2017, the architecture that underpins nearly all modern AI.

Ten years after that first TPU, Google released its seventh generation in November. Anthropic has announced it will train its Claude LLM on up to 1 million TPUs. Some believe TPUs are technically as good as, or better than, Nvidia’s GPUs, Miller said.

“Traditionally, Google has only used these for internal purposes,” Miller said. “Longer term, there is a lot of speculation that Google may open up access to TPUs more broadly.”

Amazon Web Services became the next cloud provider to design its own AI chips after acquiring Israeli chip startup Annapurna Labs in 2015. AWS announced Inferentia in 2018, launched Trainium in 2022, and plans to announce the third generation of Trainium in December.

Ron Diamant, principal architect of Trainium, told CNBC that Amazon’s ASICs offer 30% to 40% better price performance than the other hardware available on AWS.

“Over time, we have found that Trainium chips can handle both inference and training workloads very well,” Diamant said.

CNBC’s Katie Tarasov holds Amazon Web Services’ Trainium 2 AI chip at AWS’s new AI data center in New Carlisle, Indiana, on October 8, 2025. (Photo: Erin Black)

In October, CNBC traveled to Indiana for the first-ever camera tour of Amazon’s largest AI data center, where Anthropic trains its models on 500,000 Trainium 2 chips. AWS is equipping other data centers with Nvidia GPUs to meet demand from AI customers such as OpenAI.

Building an ASIC is not easy, which is why companies turn to chip designers Broadcom and Marvell. Those firms provide “the IP, know-how and networking” to help clients build their ASICs, Miller said.

“Broadcom in particular has been one of the biggest beneficiaries of the AI boom,” Miller said.

Broadcom helps Google build its TPUs and helped build Meta’s Training and Inference Accelerator, which launched in 2023, and it has a new contract to help OpenAI build its own custom ASICs starting in 2026.

Microsoft is also getting into the ASIC game, telling CNBC that its in-house Maia 100 chips are currently deployed in data centers in the eastern United States. Qualcomm has its AI200, Intel its Gaudi AI accelerators, and Tesla its AI5 chip. A number of startups are going all-in on custom AI chips as well, including Cerebras, which makes giant wafer-scale AI chips, and Groq, whose language processing units focus on inference.

In China, Huawei, ByteDance, and Alibaba make custom ASICs, although export controls on cutting-edge manufacturing equipment and AI chips remain an obstacle.

Edge AI with NPUs and FPGAs

The final big category of AI chips is those made to run AI on a device rather than in the cloud. These chips are typically integrated into a device’s main system on a chip, or SoC. Edge AI chips, as the name suggests, let devices offer AI capabilities while preserving battery life and space for other components.

“You can do it right on your phone with very low latency, so you don’t have to go all the way to a data center,” said Saif Khan, a former White House AI and semiconductor policy adviser. “And you can keep the data on your phone private.”

Neural processing units are the primary type of edge AI chip. Qualcomm, Intel, and AMD all make NPUs that enable AI capabilities in personal computers.

Although Apple doesn’t use the term NPU, the company’s M-series chips in MacBooks include a dedicated neural engine. Apple has also included neural accelerators in its latest iPhone A-series chips.

“This is efficient for us, it’s responsive, and we know we have much more control over the experience,” Tim Millett, Apple’s vice president of platform architecture, told CNBC in an exclusive interview in September.

Modern Android smartphones have NPUs built into their flagship Qualcomm Snapdragon chips, and Samsung’s Galaxy smartphones carry NPUs of their own. NPUs from companies such as NXP and Nvidia power embedded AI in cars, robots, cameras, smart home devices, and more.

“Most of the funding is going to data centers, but that’s going to change over time because AI is going to be deployed in mobile phones, cars, wearables, and all kinds of other applications to a much higher degree than it is today,” Miller said.

Additionally, there are field-programmable gate arrays, or FPGAs, which can be reconfigured in software after they are manufactured. FPGAs are far more flexible than NPUs or ASICs, but they deliver lower raw performance and energy efficiency for AI workloads.

AMD became the largest FPGA maker when it acquired Xilinx for $49 billion in 2022; Intel, which bought Altera for $16.7 billion in 2015, is second.

These AI chip designers all rely on one company for manufacturing: Taiwan Semiconductor Manufacturing Company.

TSMC has a huge new chip manufacturing facility in Arizona, and Apple has pledged to move some of its chip production there. In October, Nvidia CEO Jensen Huang said Blackwell GPUs were also in “full production” in Arizona.

The AI chip field is crowded, but dethroning Nvidia won’t be easy.

“They’re where they are because they earned it and built it over the years,” Newman said. “They won the developer ecosystem.”

Watch the video to see how all the AI chips work: https://www.cnbc.com/video/2025/11/21/nvidia-gpus-google-tpus-aws-trainium-comparing-the-top-ai-chips.html


