Sam Altman teases 100 million GPU scale for OpenAI that could cost $3 trillion — ChatGPT maker to cross 'well over 1 million' by end of year

Sam Altman (Image credit: Getty / Bloomberg)

OpenAI CEO Sam Altman isn't exactly known for thinking small, but his latest comments push the boundaries of even his usual brand of audacious tech talk. In a new post on X, Altman revealed that OpenAI is on track to bring "well over 1 million GPUs online" by the end of this year. That alone is an astonishing number; consider that Elon Musk's xAI, which made waves earlier this year with its Grok 4 model, runs on about 200,000 Nvidia H100 GPUs. OpenAI will field five times that compute, and even that isn't enough for Altman. "Very proud of the team..." he wrote, "but now they better get to work figuring out how to 100x that lol."

"we will cross well over 1 million GPUs brought online by the end of this year! very proud of the team but now they better get to work figuring out how to 100x that lol" (Sam Altman on X, July 20, 2025)

The "lol" might make it sound like he's joking, but Altman's track record suggests otherwise. Back in February, he admitted that OpenAI had to slow the rollout of GPT-4.5 because the company was literally "out of GPUs." That wasn't just a minor hiccup; it was a wake-up call, considering Nvidia's premier AI hardware is also sold out through next year. Altman has since made compute scaling a top priority, pursuing partnerships and infrastructure projects that look more like national-scale operations than corporate IT upgrades. When OpenAI hits its 1 million GPU milestone later this year, it won't just be a social media flex; it will cement the company as the single largest consumer of AI compute on the planet.

Anyhow, let's talk about that 100x goal, because it's exactly as wild as it sounds. At current market prices, 100 million GPUs would cost around $3 trillion, close to the GDP of the UK, and that's before factoring in the power requirements or the data centers needed to house them. Nvidia could not produce that many chips in the near term, and no grid could supply the energy to power them all. Yet that's the kind of moonshot thinking that drives Altman. It's less a literal target than a foundation for AGI (Artificial General Intelligence), whether that means custom silicon, exotic new architectures, or something we haven't even seen yet. OpenAI clearly wants to find out.

The clearest proof of this ambition is OpenAI's Texas data center, now the world's largest single facility, which consumes around 300 MW, enough to power a mid-sized city, and is set to hit 1 gigawatt by mid-2026. Such massive and unpredictable energy demands are already drawing scrutiny from Texas grid operators, who warn that stabilizing voltage and frequency for a site of this scale requires costly, rapid infrastructure upgrades that even state utilities struggle to match. For now, though, OpenAI is betting that the buildout continues and the bubble doesn't burst.
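A rough sanity check shows why a 1 GW target tracks with a million-GPU fleet. The per-GPU draw (~700 W, in line with an H100-class accelerator) and the datacenter overhead multiplier (PUE of 1.2) below are illustrative assumptions, not figures from OpenAI:

```python
# Back-of-the-envelope power estimate for a 1-million-GPU site.
gpus = 1_000_000
watts_per_gpu = 700   # assumed H100-class accelerator draw
pue = 1.2             # assumed datacenter overhead (cooling, networking, etc.)

total_mw = gpus * watts_per_gpu * pue / 1e6
print(f"{total_mw:.0f} MW")  # ~840 MW, in the ballpark of the 1 GW target
```

Under these assumptions, a single million-GPU campus already approaches the gigawatt scale the Texas site is being built toward.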

"Fun math: 100,000,000 GPUs × $30,000 per GPU = $3,000,000,000,000 ($3 trillion)" (posted on X, July 20, 2025)
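The back-of-the-envelope arithmetic from the quoted post is easy to reproduce. Note that the $30,000 unit price is the post's own assumption, not a market quote, and actual pricing varies widely by model and volume:

```python
# Reproduce the "fun math" cost estimate for 100 million GPUs.
gpu_count = 100_000_000
price_per_gpu_usd = 30_000  # assumed average unit price from the post

total_cost = gpu_count * price_per_gpu_usd
print(f"${total_cost:,} (~${total_cost / 1e12:.0f} trillion)")
# prints "$3,000,000,000,000 (~$3 trillion)"
```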

The company isn't just hoarding Nvidia hardware, either. While Microsoft's Azure remains its primary cloud backbone, OpenAI has partnered with Oracle to build its own data centers and is rumored to be exploring Google's TPU accelerators to diversify its compute stack. It's part of a larger arms race, where everyone from Meta to Amazon is building in-house AI chips and betting big on high-bandwidth memory (HBM) to keep these monster models fed. Altman, for his part, has hinted at OpenAI's own custom chip plans, which would make sense given the company's growing scale.

Altman’s comments also double as a not-so-subtle reminder of how quickly this field moves. A year ago, a company boasting 10,000 GPUs sounded like a heavyweight contender. Now, even 1 million feels like just another stepping stone toward something much bigger. OpenAI’s infrastructure push isn’t just about faster training or smoother model rollouts; it’s about securing a long-term advantage in an industry where compute is the ultimate bottleneck. And, of course, Nvidia would be more than happy to provide the building blocks.

Is 100 million GPUs realistic? Not today, and not without breakthroughs in manufacturing, energy efficiency, and cost. But that's the point. Altman's vision isn't bound by what's available now; it's aimed at what's possible next. The 1 million GPUs coming online by year's end set a new baseline for AI infrastructure, one that seems to diversify by the day. Everything beyond that is ambition, and if Altman's history is any guide, it might be foolish to dismiss it as mere hype.



Hassam Nasir is a die-hard hardware enthusiast with years of experience as a tech editor and writer, focusing on detailed CPU comparisons and general hardware news. When he’s not working, you’ll find him bending tubes for his ever-evolving custom water-loop gaming rig or benchmarking the latest CPUs and GPUs just for fun.
