Meta’s 5 GW Hyperion data center under construction in Richland Parish, Louisiana, January 9, 2026.
Courtesy of Meta
Meta on Wednesday announced four custom in-house chips tailored for artificial intelligence-related tasks as part of the company’s massive data center expansion plans.
The specialized silicon is part of the Meta Training and Inference Accelerator (MTIA) family of chips, first publicly unveiled in 2023, with a second-generation version announced in 2024.
Yee Jiun Song, Meta’s vice president of engineering, told CNBC that by designing its own custom chips, the social media giant can improve price per performance across its data center fleet rather than relying solely on outside vendors.
“This also increases the diversity of our silicon supply and provides some immunity from price fluctuations,” Song said. “It gives us a bit more leverage.”
The first new chip, the MTIA 300, was introduced a few weeks ago and is intended to help train small AI models that power Meta’s core ranking and recommendation tasks, Song said. These types of tasks include showing people relevant content and online ads within the company’s family of apps, including Facebook and Instagram.
The upcoming chips (MTIA 400, MTIA 450 and MTIA 500) are aimed at more cutting-edge generative AI inference tasks, such as creating images and videos from prompts written by people. Song said the chips will not be used to train massive large language models.
One Meta data center rack is equipped with 72 of Meta’s in-house MTIA 400 chips, which are optimized to accelerate AI inference. The MTIA 400 has completed its testing phase and will be deployed to Meta data centers soon.
Provided by: Meta
Meta said in a blog post that it has finished testing the MTIA 400 and is “proceeding toward deployment in data centers,” while the other two chips are expected to be operational in 2027.
“It’s unusual for a silicon company or team to release a new chip every six months. It’s a very fast pace,” Song said. “A big reason for that is because we’re building capacity so quickly right now, and we’re spending so much on capital investment, that we want to make sure we have cutting-edge chips ready to deploy.”
Song said the company expects the chips to have a “typical service life of five years or more.”
Meta’s growing AI spending includes a large data center in Louisiana, as well as two large data centers in Ohio and Indiana. Bloomberg reports that Meta is also considering leasing space at the Stargate site in Texas after OpenAI and Oracle canceled plans to expand their AI data center site.
Tech giants like Google have been developing their own silicon to fill data centers in recent years, seeking an alternative to expensive and supply-constrained GPUs from Nvidia and AMD.
These hyperscalers are creating so-called application-specific integrated circuits (ASICs), which are smaller and cheaper than flagship general-purpose AI GPUs but are limited to performing a narrower range of tasks.
Google was first to the ASIC game, releasing its initial Tensor Processing Unit in 2015. Amazon followed, announcing its first custom chip in 2018. While those Big Tech companies offer their AI chips to customers through their respective cloud computing platforms, Meta’s MTIA chips are used entirely for internal purposes.
Future MTIA chips will be equipped with more high-bandwidth memory (HBM) to power generative AI inference tasks.
The technology industry’s mega-AI push is creating a market-wide shortage of memory chips, meaning Meta’s ambitious silicon roadmap could face future supply chain constraints.
“We absolutely worry about HBM supply,” Song said. “But we believe we have secured supply for what we plan to build.”
Memory is typically a cyclical business, with chip designers securing supply from manufacturers such as Samsung, SK Hynix and Micron on short-term contracts.
Song did not comment on whether the company has long-term contracts with memory vendors to prevent shortages, but said Meta is taking a “multi-pronged” approach to its supply chain and silicon strategy.
In recent weeks, Meta has signed multi-year deals to equip its data centers with millions of Nvidia GPUs and up to 6 gigawatts of AMD GPUs.
“Workloads are changing rapidly, so we want to make sure we have options,” Song said, referring to the chip deals.
Meta’s new in-house chips are manufactured by Taiwan Semiconductor Manufacturing Company, which operates primarily out of Taiwan and has a large new chip fabrication campus in Arizona.
Meta declined to comment on whether the chips will be manufactured in Arizona.
Most of Meta’s “substantial team” of several hundred engineers who worked on the silicon are based in the United States, Song said. Meta has a total of 30 operational and planned data centers, 26 of which are located in the United States.
WATCH: A deep dive into AI chips, from Nvidia GPUs to ASICs from Google and Amazon.