Elon Musk's next xAI data centers are expected to house millions of AI chips and consume so much power that Musk has reportedly bought a power plant overseas and intends to ship it to the U.S., according to SemiAnalysis's Dylan Patel, who outlined xAI's recent progress in a podcast. Notably, Musk confirmed the claim in a subsequent tweet.
Elon Musk's current xAI Colossus supercomputer is already one of the most powerful and power-hungry machines on the planet, housing some 200,000 Nvidia Hopper GPUs and consuming an astounding 300 MW or so of power, and xAI has faced significant headwinds in supplying it with enough electricity.
The challenges only become more intense as the company moves forward — Musk faces a monumental challenge with powering his next AI data center, one that is predicted to house one million AI GPUs, thus potentially consuming the same amount of power as 1.9 million households. Here's how the data center could consume that much power, and how Musk plans to deliver it.
Beyond Colossus
SemiAnalysis's Dylan Patel shares his thoughts on xAI, Grok, and research labs: "xAI has ALOT of compute concentrated; ALOT of great researchers; 200K GPUs already running; purchased a new factory in memphis; building out a new data center; now they are buying a power plant…" (pic.twitter.com/FhxxmFBw23, July 1, 2025)
Elon Musk's xAI has assembled vast computing resources and a team of talented researchers to advance the company's Grok AI models, Patel said. However, even bigger challenges lie ahead.
It is no secret that Elon Musk has already run into trouble powering his existing xAI data center. Currently, the company's main data center, Colossus, which houses 200,000 Nvidia Hopper GPUs, is located near Memphis, Tennessee. To power this machine, xAI installed 35 gas turbines capable of producing 420 MW combined, and deployed Tesla Megapack systems to smooth out the power draw. However, the power demands are about to get much more serious.
Beyond the Colossus buildout, xAI is rapidly acquiring and developing new facilities. The company has purchased a factory in Memphis that is being converted into additional data center space, big enough to house around 125,000 eight-way GPU servers (one million GPUs in total), along with all supporting hardware, including networking, storage, and cooling.
A million Nvidia Blackwell GPUs will consume between 1,000 MW (1 GW) and 1,400 MW (1.4 GW), depending on the accelerator models (B200, GB200, B300, GB300) used and their configuration.
However, the GPUs are not the only load on the power system; you must also account for the power consumption of CPUs, DDR5 memory, storage, networking gear, cooling, air conditioning, power supply inefficiency, and other factors such as lighting. In large AI clusters, a useful approximation is that overhead adds another 30% to 50% on top of the AI GPU power draw, a figure typically expressed as PUE (power usage effectiveness).
Accordingly, depending on which Blackwell accelerators xAI uses, a million-GPU data center will consume between 1,400 MW and 1,960 MW (assuming a PUE of 1.4). How to power a data center with a million high-performance GPUs for AI training and inference is a big question, as the undertaking is comparable to powering around 1.9 million homes.
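The arithmetic above can be sketched in a few lines. This is a rough estimate using the figures cited in this article, not official xAI numbers: per-GPU draw of roughly 1.0 kW at the low end (B200-class) and 1.4 kW at the high end (GB300-class), with facility overhead modeled as a PUE multiplier.

```python
def estimate_total_mw(num_gpus: int, gpu_kw: float, pue: float) -> float:
    """Total facility draw in MW: raw GPU load scaled by PUE overhead."""
    it_load_mw = num_gpus * gpu_kw / 1000  # GPU draw alone, in MW
    return it_load_mw * pue                # add cooling, networking, losses, etc.

NUM_GPUS = 1_000_000
PUE = 1.4  # ratio of total facility power to IT power, per the 30-50% overhead range

# Approximate per-GPU draw at both ends of the Blackwell range (assumed values)
for label, gpu_kw in [("low end (B200-class)", 1.0), ("high end (GB300-class)", 1.4)]:
    total = estimate_total_mw(NUM_GPUS, gpu_kw, PUE)
    print(f"{label}: {total:,.0f} MW")  # prints 1,400 MW and 1,960 MW
```

Dividing the high-end 1,960 MW figure by the article's 1.9-million-home comparison implies an average draw of roughly 1 kW per household, which is in line with typical U.S. residential consumption.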
A power plant?
A large-scale solar power plant alone is not viable for a 24/7 compute load of this magnitude, as one would need several gigawatts of panels, plus massive battery storage, which is prohibitively expensive and land-intensive.
The most practical and commonly used option is building multiple natural gas combined-cycle gas turbine (CCGT) plants, each capable of producing 500 MW – 1,500 MW. This approach is relatively fast to deploy (several years), scalable in phases, and easier to integrate with existing electrical grids. Perhaps this is what xAI plans to import to the U.S.
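To illustrate the phased scaling, here is a minimal sketch of how many CCGT units would cover the load, assuming (hypothetically) mid-size 500 MW units and the high-end 1,960 MW estimate from above:

```python
import math

TOTAL_MW = 1_960      # high-end facility estimate from the PUE calculation above
CCGT_UNIT_MW = 500    # assumed mid-size combined-cycle unit

units_needed = math.ceil(TOTAL_MW / CCGT_UNIT_MW)
print(units_needed)  # 4 units, which can be brought online one phase at a time
```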
Alternatives like nuclear reactors could technically meet the load with fewer units (each can produce around 1,000 MW) with no direct carbon emissions, but nuclear plants take much longer to design, permit, and build (up to 10 years). It is unlikely that Musk has managed to buy a nuclear power plant overseas, with plans to ship it to the U.S.
In practice, any organization attempting a 1.4–1.96 GW deployment — like xAI — will effectively become a major industrial energy buyer. For now, xAI's Colossus produces power onsite and purchases power from the grid; it is therefore likely that the company's next data center will follow suit and combine a dedicated onsite plant with grid interconnections.
Apparently, because acquiring a power plant in the U.S. can take too long, xAI is reportedly buying a plant overseas and shipping it in, something that highlights how AI development now hinges not only on compute hardware and software but also on securing massive energy supplies quickly.
There's no other way
Without a doubt, a data center housing a million AI accelerators with a dedicated power plant appears to be an extreme measure. However, Patel points out that most leading AI companies are ultimately converging on similar strategies: concentrating enormous compute clusters, hiring top-tier researchers, and training ever-larger AI models. To that end, if xAI plans to stay ahead of the competition, it needs to build even more advanced and power-hungry data centers.