OpenAI couldn’t finance its data centers, so it took control of the hardware instead — company's chip design aspirations lag behind Google and Amazon

Sam Altman looking pensive (Image credit: Getty Images / Bloomberg)

OpenAI spent much of 2025 trying to build its own AI data centers, only to find that it couldn’t secure financing on competitive terms. According to a report by The Information, that failure set off a cascading chain of negotiations and compromises that ultimately redirected ambitions down the stack. Rather than owning physical real estate, OpenAI reportedly pivoted to control what goes inside them while simultaneously assembling one of the most aggressive multi-vendor chip procurement strategies in the industry.

If you can’t build it, rent it

This is understood to have begun immediately following the White House’s Stargate announcement in January 2025. OpenAI employees fanned out across the country, scouting potential sites capable of supporting campuses between 800 megawatts and 1.2 gigawatts each, prioritizing locations where significant power would come online in 2026 and 2027. Executives reportedly floated spinning Stargate out as a separate entity that would construct facilities and lease them back to OpenAI, with some discussing using it purely as a financial vehicle to raise capital for chips and infrastructure.

Ultimately, none of this came to pass. According to sources cited by The Information, when OpenAI ran the numbers, it became clear the company would pay a significant premium to secure financing on its own. Lenders reportedly offered materially better terms when a more creditworthy tenant, like Oracle, signed the lease instead.

A blockbuster deal between Oracle and OpenAI followed, with the two companies agreeing to develop 4.5 gigawatts of data center capacity across multiple U.S. sites. The pair also allegedly share economic risk on cost overruns and savings, a small but important detail that had not previously been publicly disclosed.

The Texas compromise

OpenAI Dev Day, Nov 7

(Image credit: OpenAI)

One site in Texas, a 1 GW facility in Milam County, was of particular interest to OpenAI. The company had reportedly hoped that the site would become its first self-built data center, while SoftBank, the other principal Stargate partner, wanted to develop and own it outright. Between September and October 2025, OpenAI's team made multiple trips to Japan to negotiate directly with SoftBank's Masayoshi Son, with talks reportedly stretching for hours across multiple sessions.

These meetings led to a compromise announced on January 9 of this year, when OpenAI and SoftBank each invested $500 million into SB Energy, with OpenAI selecting SB Energy to build and operate the Milam County campus. Law firm Sullivan & Cromwell, which advised OpenAI, said in a statement that the deal “brings together OpenAI's first-party data center design with SB Energy's proven expertise in speed, cost discipline, and integrated energy delivery,” while OpenAI’s president Greg Brockman described the arrangement as combining SB Energy's "strength in data center infrastructure and energy development" with "OpenAI's deep domain expertise in data center engineering." In other words, SoftBank builds and owns the project, while OpenAI controls design.

According to The Information, design control covers cluster architecture, cooling systems, rack configs, and power infrastructure, four categories that together determine every meaningful hardware decision made inside a facility.

Control over cluster architecture means that it’s OpenAI, not SoftBank, that decides how GPUs or custom accelerators are grouped, how many form a single training or inference unit, and how they’re interconnected. So, while OpenAI doesn’t own the land or the physical building, it does have full say over all hardware decisions — that is, no doubt, what OpenAI wanted in the first place, even if the compromise meant that it doesn’t “own” the project on paper.

Late to the party

OpenAI

(Image credit: OpenAI)

OpenAI has assembled a substantial silicon strategy since the Texas compromise, most of it formally confirmed. In September, OpenAI and Nvidia announced a letter of intent to deploy at least 10 gigawatts of Nvidia systems, with Nvidia intending to invest up to $100 billion in OpenAI as milestones are hit; the first gigawatt targets the second half of 2026 on the Vera Rubin platform.

That arrangement has since evolved: Nvidia is now reportedly moving toward a $30 billion direct equity stake in OpenAI, not tied to deployment milestones — as part of OpenAI's current funding round at a $730 billion pre-money valuation. As of December, Nvidia's CFO confirmed the definitive agreement had not yet been completed — and uncertainties remain — with OpenAI's purchases still flowing indirectly through cloud partners like Microsoft and Oracle. The actual figures, in other words, remain a work in progress.

Then there’s AMD, with whom OpenAI announced a definitive agreement in October. This covers 6 gigawatts of AMD Instinct GPUs, beginning with the MI450 series in the second half of 2026, with AMD issuing OpenAI a warrant for up to 160 million shares that vests as deployment milestones are reached. A week later, on October 13, OpenAI and Broadcom announced a term sheet covering 10 gigawatts of OpenAI-designed custom AI accelerators, with racks “scaled entirely with Ethernet and other connectivity solutions from Broadcom,” and in January, a confirmed $10 billion deal with Nvidia challenger Cerebras locked in 750 megawatts of Wafer Scale Engine 3 capacity through 2028 for low-latency inference workloads.

This split between training and inference makes sense. Nvidia’s GPU ecosystem remains extremely difficult to displace for large-scale model training, where CUDA's maturity introduces switching costs that can’t be eliminated. Inference is a different story: Cerebras' wafer-scale architecture eliminates the inter-chip communication latency that constrains GPU clusters on latency-sensitive tasks, and custom ASICs reflect the same cost calculus that Google, Amazon, Meta, and Microsoft have already worked through: at sufficient scale, the upfront cost of chip design is dwarfed by per-unit savings across hundreds of thousands of chips. Amazon, for example, claims 30% to 40% cost savings on specific workloads using Trainium versus equivalent Nvidia hardware.
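To see why that amortization argument works, here is a rough back-of-the-envelope sketch of the break-even point. Every figure below is a hypothetical placeholder chosen for illustration (the design-cost and chip-price numbers are assumptions, not reported figures); only the 30–40% savings range comes from Amazon's Trainium claim cited above.

```python
# Illustrative break-even math for a custom-ASIC program versus buying
# merchant GPUs. All inputs are hypothetical placeholders, not reported data.

def breakeven_units(design_cost: float, merchant_price: float, savings_rate: float) -> float:
    """Number of chips at which per-unit savings repay the one-time design cost."""
    per_unit_saving = merchant_price * savings_rate
    return design_cost / per_unit_saving

# Assume a $500M chip-design program, a $30,000 merchant GPU, and the midpoint
# of the ~30-40% per-workload savings Amazon claims for Trainium (35% here).
units = breakeven_units(design_cost=500e6, merchant_price=30_000, savings_rate=0.35)
print(round(units))  # prints 47619
```

Under these assumed inputs, the design program pays for itself after roughly 48,000 chips, which is why the argument only holds "at sufficient scale": deployments measured in hundreds of thousands of accelerators clear that threshold comfortably, while smaller buyers never recoup the upfront cost.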

The caveat is that OpenAI is arriving at this realization considerably later than its peers. Google began TPU development in 2013. Amazon launched Inferentia in 2018. Microsoft began its Maia program around 2019. Every company on that list will note that it’s not the chip, but the software stack that takes years to mature, and OpenAI is beginning that process now.

In November 2025, OpenAI confirmed the hire of Intel’s former chief technology and AI officer, Sachin Katti, to lead its infrastructure organization. According to The Information, his mandate is to develop OpenAI's data center intellectual property so future deals are built around the company's own hardware requirements. He reportedly oversees chip selection and the full compute roadmap, with the heads of data centers and industrial compute now reporting to him.

So, while it’s true that OpenAI still doesn’t own a single data center, it does have design authority over every campus it occupies, a confirmed custom accelerator program, production deployments on Cerebras hardware, and a hardware executive whose job is to close the gap, following the same path all the other hyperscalers have walked.

Luke James is a freelance writer and journalist. Although his background is in law, he has a personal interest in all things tech, especially hardware and microelectronics, and anything regulatory.
