Meta, the parent company of Facebook, Instagram, and WhatsApp, has quietly begun testing its first in-house chip designed to train artificial intelligence (AI) models. According to sources close to the matter, this move marks a major step in Meta's effort to create custom silicon and reduce its reliance on external chip suppliers like Nvidia.

From Testing to Scaling Up

The chip is currently in a limited test rollout; if the initial results are promising, Meta could scale it up to wider production. It's all part of the company's broader strategy to take more control over its infrastructure, especially as it doubles down on AI to fuel its future growth.

Slashing Costs in the AI Arms Race

Meta's investment in AI isn't cheap. The company has projected total expenses between US$114 billion and US$119 billion for 2025, with as much as US$65 billion earmarked for capital expenditures—much of that going into AI infrastructure. Designing its own chips could significantly cut those long-term costs.

A Chip Built for AI

One source described the new chip as a "dedicated accelerator," meaning it's designed specifically for AI training tasks rather than general-purpose computing. Because they are tuned to a narrower set of operations, such accelerators can be more power-efficient at their target workloads than the general-purpose GPUs commonly used for AI today.

Meta is reportedly working with Taiwan's TSMC, the world's largest contract chipmaker, to produce the new chip.

The Long Road to Tape-Out

This test phase began after Meta completed its first "tape-out"—a key milestone in chip development where the finished design is sent to the factory for physical manufacturing. Tape-outs are expensive and time-consuming, typically costing tens of millions of dollars and taking months to complete, with no guarantee of success. If the silicon comes back flawed, Meta would need to diagnose the problem, fix the design, and repeat the process.

Both Meta and TSMC have declined to comment on the chip project.

Part of the MTIA Series

This new chip is part of Meta's MTIA (Meta Training and Inference Accelerator) initiative. The program has had its share of false starts—in fact, Meta previously scrapped a similar chip during a comparable development stage.

Still, progress has been made. Last year, Meta deployed an earlier MTIA chip to help with inference tasks, specifically for recommendation engines that power the content users see on Facebook and Instagram.

Training First, Then Generative AI

Meta's ultimate goal is to use its chips not just for inference, but also for the far more compute-intensive process of AI training, in which a model's parameters are tuned by feeding it massive amounts of data. The company aims to start by training recommendation systems on its own chips, and later expand into generative AI tools like its Meta AI chatbot.
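
To make that distinction concrete, here is a minimal, purely illustrative Python sketch of a toy embedding-based recommender (synthetic data and made-up dimensions; it reflects nothing about Meta's actual systems). Training repeatedly runs the model forward, measures its error, and adjusts its parameters; inference is a single forward pass over the already-trained parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy recommender: score user/item pairs via embedding dot products.
n_users, n_items, dim = 50, 100, 8
user_emb = rng.normal(scale=0.1, size=(n_users, dim))
item_emb = rng.normal(scale=0.1, size=(n_items, dim))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Synthetic interaction log: (user, item, clicked) triples.
users = rng.integers(0, n_users, size=5000)
items = rng.integers(0, n_items, size=5000)
clicks = rng.integers(0, 2, size=5000).astype(float)

# --- Training: the heavy workload Meta wants to move onto its own silicon.
lr = 0.5
for _ in range(20):
    u, v = user_emb[users], item_emb[items]
    p = sigmoid(np.sum(u * v, axis=1))   # predicted click probability
    grad = (p - clicks)[:, None]         # log-loss gradient w.r.t. the logit
    np.add.at(user_emb, users, -lr * grad * v / len(users))
    np.add.at(item_emb, items, -lr * grad * u / len(items))

# --- Inference: the workload the earlier MTIA chip already handles.
def recommend(user_id, k=5):
    scores = sigmoid(item_emb @ user_emb[user_id])
    return np.argsort(scores)[::-1][:k]  # top-k item ids for this user

print(recommend(user_id=0))
```

The gap in compute is structural: training loops like this make many forward and backward passes over enormous datasets, while serving a recommendation takes just one forward pass, which is why inference was the easier first target for a custom chip.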

"We're figuring out how to train recommender systems first, and eventually want to handle both training and inference for generative AI," said Chris Cox, Meta's Chief Product Officer, during a recent tech conference. He called the development process a "walk, crawl, run" situation but said the company considers its initial inference chip a major success.

A Rocky History with In-House Chips

This isn't Meta's first attempt at creating its own chip. A previous custom inference chip failed during testing, prompting Meta to pivot and invest heavily in Nvidia GPUs in 2022 instead. Since then, Meta has become one of Nvidia's largest customers, using the company's powerful chips to support everything from ad systems to AI models like Llama.

The Bigger Picture: Is Bigger Better?

Interestingly, the scaling assumption behind Nvidia's dominance has come under scrutiny recently. Some AI researchers are questioning whether throwing ever more data and computing power at large language models is still the best path forward.

That skepticism grew louder after Chinese AI startup DeepSeek launched a new series of more efficient models in January, which rely more on smart inference strategies than on massive training runs. The result? A dip in AI stocks, with Nvidia's shares temporarily dropping by as much as 20% before partially recovering.

Still in the Game

Despite the challenges, Nvidia remains the go-to supplier of high-performance AI chips. But if Meta's custom chip passes its tests, it could mark the beginning of a shift—both in how tech giants build out their AI infrastructure and in the broader competition for dominance in the AI era.