Microsoft's push into custom artificial intelligence hardware has hit a serious snag. Its next-generation Maia chip, code-named Braga, won't enter mass production until 2026 – at least six months behind schedule. The Information reports that the delay raises fresh doubts about Microsoft's ability to challenge Nvidia's dominance in the AI chip market and underscores the steep technical and organizational hurdles of building competitive silicon.
Microsoft launched its chip program to reduce its heavy reliance on Nvidia's high-performance GPUs, which power most AI data centers worldwide. Like cloud rivals Amazon and Google, it has invested heavily in custom silicon for AI workloads. However, the latest delay means Braga will likely lag behind Nvidia's Blackwell chips in performance by the time it ships, widening the gap between the two companies.
The Braga chip's development has faced numerous setbacks. Sources familiar with the project told The Information that unexpected design changes, staffing shortages, and high turnover have repeatedly delayed the timeline.
One setback came when OpenAI, a key Microsoft partner, requested new features late in development. These changes reportedly destabilized the chip during simulations, causing further delays. Meanwhile, pressure to meet deadlines has driven significant attrition, with some teams losing up to 20 percent of their members.
The Maia series, including Braga, reflects Microsoft's push to vertically integrate its AI infrastructure by designing chips tailored for Azure cloud workloads. Announced in late 2023, the Maia 100 uses advanced 5-nanometer technology and features custom rack-level power management and liquid cooling to manage AI's intense thermal demands.
Microsoft optimized the chips for inference, not the more demanding training phase. That design choice aligns with the company's plan to deploy them in data centers powering services like Copilot and Azure OpenAI. However, the Maia 100 has seen limited use beyond internal testing because Microsoft designed it before the recent surge in generative AI and large language models.
"What's the point of building an ASIC if it's not going to be better than the one you can buy?" – Nividia CEO Jensen Huang
In contrast, Nvidia's Blackwell chips, which began rolling out in late 2024, are designed for both training and inference at a massive scale. Featuring over 200 billion transistors and built on a custom TSMC process, these chips deliver exceptional speed and energy efficiency. This technological advantage has solidified Nvidia's position as the preferred supplier for AI infrastructure worldwide.
The stakes in the AI chip race are high. Microsoft's delay means Azure customers will rely on Nvidia hardware longer, potentially driving up costs and limiting Microsoft's ability to differentiate its cloud services. Meanwhile, Amazon and Google continue to advance their own silicon, with Trainium 3 and the seventh-generation Tensor Processing Unit gaining traction in data centers.
Team Green, for its part, appears unfazed by the competition. Nvidia CEO Jensen Huang recently acknowledged that major tech companies are investing in custom AI chips but questioned the rationale for doing so if Nvidia's products already set the standard for performance and efficiency.