OpenAI and Broadcom to co-develop 10GW of custom AI chips in yet another blockbuster AI partnership — deployments start in 2026
OpenAI has signed a multi-year deal with Broadcom to co-develop and deploy 10 gigawatts of custom AI accelerators and rack systems, the companies announced on October 13. OpenAI will handle accelerator and system design, while Broadcom leads development and roll-out starting in the second half of 2026. Full deployment is targeted by the end of 2029.
The agreement forms part of an ongoing, aggressive hardware push by OpenAI. Unlike its current clusters, which rely on Nvidia GPUs, the new systems will be based on in-house accelerators paired with Broadcom’s networking and hardware IP. The deal could mark a shift away from traditional GPU-centric clusters in favor of tightly integrated silicon tailored to OpenAI’s training and inference workloads.
The two companies have already been working together for over 18 months, and this formal agreement builds on that collaboration. Few technical details have been disclosed, but the joint announcement confirms that the systems will use Ethernet-based networking, suggesting a data-center architecture designed for scalability and vendor neutrality. OpenAI says deployments will be phased over several years, with the first racks going online in the second half of 2026.
The new agreement adds to OpenAI’s existing partnerships with Nvidia and AMD, bringing the company’s total hardware commitments to an estimated 26 gigawatts, including roughly 10 gigawatts of Nvidia infrastructure and an undisclosed slice of AMD’s upcoming MI series.
Interestingly, OpenAI is not believed to be Broadcom’s still-unidentified $10 billion customer. Speaking with CNBC alongside OpenAI’s Greg Brockman, Broadcom semiconductor president Charlie Kawwas joked, “I would love to take a $10 billion [purchase order] from my good friend Greg,” adding, “He has not given me that PO yet.” The Wall Street Journal reports that the deal is worth “multiple billions of dollars.”
OpenAI stands to gain a deep bench in ASIC design and proven supply chain maturity from Broadcom, which already produces custom AI silicon for hyperscale customers, including Google’s TPU infrastructure. By leveraging Broadcom’s Ethernet and chiplet IP, OpenAI gets a path to differentiated hardware without building a silicon team from scratch.
Meanwhile, for Nvidia, the deal adds to a growing list of partial defections among major AI customers: Amazon, Google, Meta, and Microsoft are all now pursuing custom accelerators of their own. What remains to be seen is how well these bespoke solutions perform at scale, and whether vendors like Broadcom can match the ecosystem maturity of Nvidia’s CUDA.
Neither company has disclosed foundry partners, packaging flows, or memory choices for the upcoming accelerators. Those decisions will shape delivery timelines just as much as wafer capacity. With deployment still a year out, the clock is ticking.
Luke James is a freelance writer and journalist. Although his background is in law, he has a personal interest in all things tech, especially hardware and microelectronics, and anything regulatory.