By Dr. Tim Sandle
SCIENCE EDITOR
DIGITAL JOURNAL
March 22, 2026
Photo: World's richest man Elon Musk is on track to become the world's first trillionaire. — Copyright GETTY IMAGES NORTH AMERICA/AFP/File Christopher Furlong
A report on AI supercomputers indicates that Elon Musk’s xAI Colossus is the most powerful AI machine on the planet. No company has invested more heavily in AI computing than Meta, which owns three of the top 10 AI supercomputers.
With artificial intelligence now consuming as much electricity as mid-sized cities, a report by data infrastructure provider TRG Datacenters reveals the world’s most powerful AI infrastructures and the companies behind them.
To compare supercomputers built with different hardware, the research converted every system’s chips into H100 equivalents, a common unit based on NVIDIA’s most widely used AI processor. Alongside raw computing power, the report also calculated how much electricity each cluster consumes per 10K H100 equivalents, showing which systems deliver more computing output per megawatt of power. The study is based on Epoch AI’s global GPU cluster database.
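The efficiency metric described above can be sketched in a few lines. This is a hedged illustration of the arithmetic, not the study's actual code; the function name and figures plugged in are taken from the table below.

```python
# Sketch of the report's normalization step (illustrative, not Epoch AI's
# or TRG Datacenters' actual code): given a cluster's power draw and its
# size in H100 equivalents, compute "MW per 10,000 H100 equivalents".

def mw_per_10k_h100(power_mw: float, h100_equivalents: float) -> float:
    """Megawatts consumed per 10,000 H100-equivalent chips."""
    return power_mw / h100_equivalents * 10_000

# Figures for xAI Colossus Memphis Phase 3, as reported in the table.
colossus_efficiency = mw_per_10k_h100(power_mw=352.40, h100_equivalents=275_796)
print(round(colossus_efficiency, 2))  # → 12.78, matching the table
```

A lower number means the cluster delivers more computing output per megawatt, which is why Colossus's 12.78 stands out against the 14.27 typical of the commercial H100 clusters in the ranking.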
The 10 most powerful AI supercomputers in the world
| Name | H100 equivalents | Owner | Country | MW per 10k H100-eq | Power (MW) |
|---|---|---|---|---|---|
| xAI Colossus Memphis Phase 3 | 275,796 | xAI | United States of America | 12.78 | 352.40 |
| Meta 100k | 100,000 | Meta AI | United States of America | 14.27 | 142.70 |
| OpenAI/Microsoft Goodyear Arizona | 100,000 | Microsoft, OpenAI | United States of America | 14.27 | 142.70 |
| Oracle OCI Supercluster H200s | 65,536 | Oracle | United States of America | 14.27 | 93.50 |
| Tesla Cortex Phase 1 | 50,000 | Tesla | United States of America | 14.27 | 71.30 |
| Lawrence Livermore NL El Capitan Phase 2 | 44,143 | US Department of Energy | United States of America | 7.93 | 35.00 |
| CoreWeave H200s | 42,000 | CoreWeave | United States of America | 14.27 | 59.90 |
| Meta GenAI 2024b | 24,576 | Meta AI | United States of America | 14.27 | 35.10 |
| Meta GenAI 2024a | 24,576 | Meta AI | United States of America | 14.27 | 35.10 |
| Jupiter (Jülich) | 23,536 | EuroHPC JU, Jülich Supercomputing Centre | Germany | 7.65 | 18.00 |
xAI Colossus in Memphis is the world’s most powerful AI supercomputer by a wide margin. Its 275K+ H100-equivalent chips give it nearly three times the capacity of the next-largest clusters from Meta and OpenAI/Microsoft. The whole system draws 352.4 MW, roughly the electricity demand of a city of 250K people. And unlike most commercial clusters in the ranking, Colossus uses less power per unit of compute, at 12.78 MW per 10K H100 equivalents.
Meta’s 100K cluster comes in second with exactly 100K H100 equivalents, one of two systems to reach that figure. The cluster needs 142.7 MW to operate, about 60% less than Colossus, though that gap largely reflects its smaller size, roughly a third of Colossus in computing terms. Meta built this system specifically to train its Llama family of AI models, and its scale shows how aggressively the company has been pushing into the AI race.
The OpenAI and Microsoft joint cluster in Goodyear, Arizona, ties with Meta for second place in raw computing power at 100K H100 equivalents. It also requires the same 142.7 MW of power, which makes it identical to Meta’s system on paper. In practice, though, this facility is the physical infrastructure behind the supercomputing partnership that brought OpenAI’s ChatGPT and Microsoft’s Copilot to billions of users.
Oracle’s H200 supercluster ranks fourth with 65K+ H100 equivalents. This system is actually built on NVIDIA’s newer H200 chips, which offer faster memory speeds than the standard H100. The cluster consumes 93.5 MW of power and runs at the same 14.27 MW efficiency ratio as the two Meta and Microsoft systems above it. Oracle has been quietly building out one of the largest AI cloud infrastructures in the world, and this cluster represents its most powerful system.
Tesla’s Cortex cluster rounds out the top five with 50K H100 equivalents and a 71.3 MW power usage. Unlike every other system in the top five, Cortex wasn’t built to sell AI computing to other companies. Tesla runs it entirely in-house to train its Full Self-Driving software, processing billions of miles of real-world driving footage to teach its cars how to move on roads.
The top 20 clusters in the ranking together draw more than 1,200 megawatts just to run their chips. Cooling adds a further 30 to 50 percent on top of that base power usage.
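The cooling overhead described above is a simple back-of-envelope calculation; a minimal sketch, assuming the report's figures of a 1,200 MW compute baseline and a 30 to 50 percent cooling surcharge:

```python
# Back-of-envelope estimate (illustrative): total power for the top 20
# clusters once cooling overhead is layered on top of the compute draw.

base_mw = 1_200           # combined compute draw of the top 20 clusters (report figure)
cooling_low, cooling_high = 0.30, 0.50  # cooling overhead range from the report

total_low = base_mw * (1 + cooling_low)
total_high = base_mw * (1 + cooling_high)
print(f"{total_low:.0f}–{total_high:.0f} MW")  # → 1560–1800 MW including cooling
```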