OpenAI and Broadcom to co-develop 10GW of custom AI chips in yet another blockbuster AI partnership — deployments start in 2026


OpenAI has signed a multi-year deal with Broadcom to co-develop and deploy 10 gigawatts of custom AI accelerators and rack systems, the companies announced on October 13. OpenAI will handle accelerator and system design, while Broadcom leads development and roll-out starting in the second half of 2026. Full deployment is targeted by the end of 2029.

The agreement forms part of an ongoing, aggressive hardware push by OpenAI. In contrast to its current reliance on Nvidia GPUs, the new systems will be based on in-house accelerators paired with Broadcom's networking and hardware IP. The deal could mark a shift away from traditional GPU-centric clusters in favor of tightly integrated silicon tailored to OpenAI's training and inference workloads.

The new agreement adds to OpenAI's existing partnerships with Nvidia and AMD, bringing the company's total hardware commitments to an estimated 26 gigawatts, including roughly 10 gigawatts of Nvidia infrastructure and 6 gigawatts of AMD's upcoming Instinct MI series.


Luke James is a freelance writer and journalist. Although his background is in law, he has a personal interest in all things tech, especially hardware and microelectronics, and anything regulatory.
