All computing devices require a component called random access memory, or RAM, for short-term data storage, but this year there won’t be enough of this essential component to meet global demand.
That’s because artificial intelligence chips from companies like Nvidia, Advanced Micro Devices and Google require a lot of RAM, and those companies are first in line for the components.
Three major memory vendors account for almost the entire RAM market: Micron, SK Hynix and Samsung Electronics. All three businesses are benefiting from the surge in demand.
“We’re seeing a very rapid and significant spike in memory demand that far exceeds our memory supply capacity, and by our estimation, the memory industry’s overall capacity,” Sumit Sadana, Micron’s chief business officer, told CNBC at the CES trade show in Las Vegas this week.
Micron’s stock price has risen 247% over the past year, and the company reported that its net income nearly tripled in its most recent quarter. Samsung also said this week that it expects its operating profit to nearly triple in the December quarter. Meanwhile, SK Hynix, whose stock has soared in South Korea, is weighing a U.S. listing, and the company announced in October that it had secured demand for its entire RAM production capacity for 2026.
Memory prices are currently skyrocketing.
TrendForce, a Taipei-based company that closely tracks the memory market, said this week that it expects the average price of DRAM memory to rise by 50% to 55% this quarter compared to the fourth quarter of 2025. TrendForce analyst Tom Hsu told CNBC that this rise in memory prices is “unprecedented.”
A 3-to-1 trade-off
Sadana said chipmakers like Nvidia surround the part of the chip that does the calculations (the graphics processing unit, or GPU) with several blocks of a fast, specialized component called high-bandwidth memory, or HBM. The HBM is often visible when chipmakers show off their new chips. Micron supplies memory to both Nvidia and AMD, the two major GPU makers.
Nvidia’s Rubin GPUs, which recently entered production, feature up to 288 GB of next-generation HBM4 memory per chip. The HBM is attached in eight visible blocks above and below the processor, and Nvidia sells the GPUs as part of a single server rack, called the NVL72, that combines 72 of them into one system. By comparison, smartphones typically ship with 8 GB or 12 GB of low-power DDR memory.
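Using only the figures above, a quick back-of-the-envelope calculation shows how much RAM a single rack absorbs (a rough sketch; the rack total is derived from the per-chip number, not a quoted spec):

```python
# Back-of-the-envelope: HBM in one NVL72 rack vs. a typical smartphone.
HBM_PER_GPU_GB = 288   # up to 288 GB of HBM4 per Rubin GPU (per the article)
GPUS_PER_RACK = 72     # an NVL72 rack combines 72 GPUs
PHONE_RAM_GB = 12      # high-end smartphone RAM

rack_hbm_gb = HBM_PER_GPU_GB * GPUS_PER_RACK
print(f"HBM per rack: {rack_hbm_gb:,} GB (~{rack_hbm_gb / 1024:.1f} TB)")
print(f"Smartphone equivalents: {rack_hbm_gb // PHONE_RAM_GB:,}")
# -> 20,736 GB (~20.2 TB) per rack, roughly 1,728 phones' worth of RAM
```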
Nvidia founder and CEO Jensen Huang introduces the Rubin GPUs and Vera CPUs at an Nvidia event ahead of the annual Consumer Electronics Show in Las Vegas, Nevada, on January 5, 2026.
Patrick T. Fallon | AFP | Getty Images
However, the HBM required for AI chips is much more demanding to produce than the RAM used in consumer laptops and smartphones. HBM is designed to meet the high-bandwidth specifications of AI chips and is manufactured through a complex process in which Micron stacks 12 to 16 layers of memory into a single “cube.”
For every bit of HBM Micron manufactures, it must forgo making three bits of conventional memory for other devices.
“Increasing the supply of HBM leaves less memory for the non-HBM portion of the market because of that 3-to-1 trade-off,” Sadana said.
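As a stylized illustration of that trade-off (all quantities below are hypothetical units, not Micron’s actual capacity figures):

```python
# Stylized model of the 3-to-1 trade-off: each bit of HBM produced
# displaces roughly three bits of conventional DRAM from the same
# manufacturing capacity. Units are arbitrary and illustrative.
TRADE_RATIO = 3

def conventional_output(total_capacity: float, hbm_output: float) -> float:
    """Conventional DRAM left after diverting capacity to HBM.

    total_capacity: what the fab could produce if it made only
    conventional DRAM (hypothetical units).
    """
    return total_capacity - TRADE_RATIO * hbm_output

# A fab that could make 100 units of conventional DRAM gives up
# 30 of them to produce just 10 units of HBM.
print(conventional_output(100, 10))  # -> 70
```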
TrendForce analyst Hsu said memory makers favor server and HBM applications over other customers because demand there is likely to keep growing and because businesses and cloud service providers are less price sensitive.
Micron announced in December that it would discontinue parts of its business aimed at providing memory to consumer PC makers to conserve supplies for AI chips and servers.
Some in the tech industry are surprised by how quickly consumer RAM prices have risen.
Dean Beeler, co-founder and head of technology at Juice Labs, said that a few months ago the company installed 256GB of RAM in its computers, the maximum amount supported by modern consumer motherboards. The cost at the time was about $300.
“Who would have thought that just a few months later I would end up needing $3,000 worth of RAM,” he posted on Facebook on Monday.
“Memory Wall”
Just before OpenAI’s ChatGPT hit the market in late 2022, AI researchers started to see memory as the bottleneck, said Sha Rabii, a co-founder of Majestic Labs and an entrepreneur who previously worked on silicon development at Google and Meta.
Earlier AI systems were designed for models like convolutional neural networks, which required less memory than the large language models (LLMs) that are popular today, Rabii said.
He said that while the AI chips themselves have gotten much faster, memory hasn’t kept pace, which means a powerful GPU can end up waiting for the data it needs to run an LLM.
“Performance is limited by the amount and speed of the memory installed. Just adding more GPUs is not a win,” Rabii said.
The AI industry calls this the “memory wall.”
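The arithmetic behind the wall is simple: generating each token of output requires streaming essentially all of a model’s weights out of memory, so bandwidth, not compute, often sets the ceiling. A minimal sketch, with all numbers below assumed for illustration rather than taken from the article:

```python
# "Memory wall" sketch: token generation must read every model weight
# from memory, so memory bandwidth caps throughput regardless of how
# fast the GPU can compute. All figures are illustrative assumptions.
PARAMS = 70e9            # hypothetical 70B-parameter model
BYTES_PER_PARAM = 2      # 16-bit weights
MEM_BANDWIDTH = 4e12     # assumed 4 TB/s of aggregate HBM bandwidth

weight_bytes = PARAMS * BYTES_PER_PARAM           # 140 GB of weights
seconds_per_token = weight_bytes / MEM_BANDWIDTH  # best case, one request
print(f"{seconds_per_token * 1e3:.0f} ms/token, "
      f"~{1 / seconds_per_token:.0f} tokens/s")
# -> 35 ms/token (~29 tokens/s): a faster processor alone changes nothing
```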
“The processor spends a lot of time twiddling its thumbs and waiting for data,” said Micron’s Sadana.
More memory and faster speeds mean AI systems can run larger models, serve more customers simultaneously, and offer longer “context windows” that allow chatbots and other LLMs to remember previous conversations with users, adding personalization to the experience.
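A sketch of why those context windows eat memory so quickly: every conversation being served keeps a “KV cache” that grows with its length. The model dimensions below are assumptions chosen for illustration, not any vendor’s actual figures:

```python
# KV-cache sizing sketch: each active conversation caches keys and
# values for every token it has seen, across every model layer.
# Model shape below is a hypothetical, illustrative assumption.
LAYERS, KV_HEADS, HEAD_DIM = 80, 8, 128
BYTES = 2  # 16-bit cache entries

def kv_cache_gb(context_tokens: int, concurrent_users: int) -> float:
    """Memory needed to remember `context_tokens` for each user."""
    per_token = 2 * LAYERS * KV_HEADS * HEAD_DIM * BYTES  # keys + values
    return context_tokens * concurrent_users * per_token / 1e9

# 128K-token context windows for 100 simultaneous users:
print(f"{kv_cache_gb(128_000, 100):,.0f} GB")  # -> ~4,194 GB
```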
Majestic Labs is designing an AI system for inference with 128 terabytes of memory, about 100 times more than some current AI systems, Rabii said, adding that the company plans to forgo HBM in favor of lower-cost memory. He said the additional RAM and the architecture supporting it will allow the company’s computers to serve significantly more users simultaneously than other AI servers while consuming less power.
Sold out for 2026
Wall Street has been asking consumer electronics companies such as Apple and Dell Technologies how they will deal with the memory shortage, and whether they might be forced to raise prices or cut profit margins. Memory now accounts for about 20% of a laptop’s hardware cost, Hsu said, up from 10% to 18% in the first half of 2025.
In October, Apple finance chief Kevan Parekh told analysts that the company was seeing a slight headwind from memory prices, but he downplayed it as “nothing to write home about.”
However, Dell said in November that it expected costs across all its products to rise as a result of the memory shortage. COO Jeff Clarke told analysts that Dell plans to adjust its mix of configurations to minimize the impact on pricing, but said the shortage will likely affect the retail price of its devices.
“I don’t see a way that this doesn’t find its way to the customer base,” Clarke said. “We will do everything we can to mitigate that.”
Even Nvidia, which has emerged as the biggest customer in the HBM market, faces questions about its voracious memory demands, especially for consumer products.
At a Tuesday press conference at CES, Nvidia CEO Jensen Huang was asked whether the company’s gaming customers might resent AI technology as prices for game consoles and graphics cards rise due to memory shortages.
Huang said Nvidia is a very large memory customer with long-standing relationships with companies in the space, but that AI’s needs are so great the industry will eventually have to build more memory factories.
“The demand is so high that all factories, all HBM suppliers are preparing and everything is on track,” Huang said.
Sadana said Micron can meet at most two-thirds of some customers’ medium-term memory requirements. But the company is building two large factories, known as fabs, in Boise, Idaho, which are slated to start producing memory in 2027 and 2028. Micron also plans to break ground on a factory in the town of Clay, New York, that is expected to come online in 2030, he said.
But for now, “we’re sold out for 2026,” Sadana said.
