Off The Planet
SpaceX’s billion-dollar plan to outrun the "Magical Electricity Fairies"
The Great Wall of China is not one wall. It’s a collection of walls built by different Chinese states and dynasties over roughly 2,000 years. Many sections don’t connect at all, and some run parallel to each other. The idea of a single continuous wall is a modern myth.
Another myth: you can’t see it from space. The wall is long but only 15 to 30 feet wide. Far too narrow to be seen from orbit with the naked eye. Chinese astronaut Yang Liwei confirmed he couldn’t spot it when he went to space in 2003.
Now for the smooth transition you’ve been waiting for. The Great Wall and space-based AI data centers have something in common: both are hard for people to conceptualize. For the Great Wall, that difficulty explains why the “visible from space” myth exists in the first place. Describing it that way gives people an instant sense of its scale (it’s 13,171 miles long 🤯).
Space-based AI data centers have the same problem. Nobody has a ready mental model, which makes it hard to evaluate whether we need them, or whether they’ll actually work. So let’s head to low Earth orbit and look into why some people are saying we need to build AI data centers in space, and the chances they succeed.
First, the prevailing rationale on why this is even being considered in the first place:
Demand up: Energy demand from AI is “net-new”: it doesn’t replace any existing source of demand. AI use does not reduce demand elsewhere (air conditioning, refrigeration, and so on). If there are 100 units of existing energy demand and AI requires 20 more (and analysts often underestimate AI’s power needs), there are now 120 units of demand. Increasing supply is non-negotiable, unless prices go way up.
Supply down: Global electrical output is roughly flat everywhere outside of China, and the U.S. has not been adding new supply. The real issue is how long new supply takes to come online: utility interconnection studies can take a year just to complete, and there is a massive backlog for turbines, to name just two problems. So it currently looks like the U.S. can’t add supply fast enough to meet existing demand plus the new demand from AI.
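The demand-side arithmetic above can be sketched in a few lines. The units are the text’s illustrative ones, not real terawatt-hours:

```python
# Back-of-envelope: AI demand is additive, not substitutive.
# All figures are illustrative units from the text, not forecasts.
existing_demand = 100  # existing grid demand (arbitrary units)
ai_demand = 20         # net-new demand from AI workloads
current_supply = 100   # supply roughly matched to existing demand

total_demand = existing_demand + ai_demand   # 120: nothing was displaced
shortfall = total_demand - current_supply    # 20: must be new supply (or higher prices)

print(f"Total demand: {total_demand} units")
print(f"Shortfall to cover with new supply: {shortfall} units")
```

The point of the toy numbers is the structure: because nothing on the left-hand side shrinks when AI demand arrives, the shortfall equals the entire AI load.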
Okay, so there’s a chance that the terrestrial (read: on Earth) data center build out is about to hit a brick wall made of copper and permits. Which creates a problem, as Elon puts it, “How are you going to turn the chips on? Magical power sources? Magical electricity fairies?”
I’m curious and optimistic about the role that cheap and abundant natural gas will play in the U.S. energy grid and AI boom over the next five to ten years (with the turbine supply catching up eventually). But I will take the problem as given for now and focus on the solution as Elon presented it on the Cheeky Pints podcast:
Any given solar panel can do about five times more power in space than on the ground. You also avoid the cost of having batteries to carry you through the night. It’s actually much cheaper to do in space. My prediction is that it will be by far the cheapest place to put AI. It will be space in 36 months or less.
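Is “about five times more power” plausible? Here’s a rough sanity check. The flux and capacity-factor figures below are my own generic assumptions (solar constant, a nearly always-sunlit dawn-dusk orbit, typical utility-scale ground performance), not Elon’s math:

```python
# Rough check of the "~5x more power in space" claim.
# Assumptions (mine, illustrative):
#  - solar constant above the atmosphere: ~1361 W/m^2
#  - a panel in a dawn-dusk sun-synchronous orbit sees the Sun ~99% of the time
#  - on the ground the same panel peaks near 1000 W/m^2 but averages a
#    capacity factor of roughly 20-25% after night, weather, and sun angle
SOLAR_CONSTANT = 1361          # W/m^2, above the atmosphere
ORBIT_DUTY = 0.99              # fraction of time sunlit in a favorable orbit
GROUND_PEAK = 1000             # W/m^2, standard test conditions
GROUND_CAPACITY_FACTOR = 0.22  # typical utility-scale solar

space_avg = SOLAR_CONSTANT * ORBIT_DUTY
ground_avg = GROUND_PEAK * GROUND_CAPACITY_FACTOR
ratio = space_avg / ground_avg

print(f"Average space flux:  {space_avg:.0f} W/m^2")
print(f"Average ground flux: {ground_avg:.0f} W/m^2")
print(f"Ratio: {ratio:.1f}x")
```

Under these assumptions the ratio lands in the 5–6x range, so the headline multiplier is defensible. The catch is what comes next: getting the power is the easy part.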
It’s a neat narrative. It addresses the constraints of terrestrial AI data centers, and sounds almost too cool to be true. Never one to shy away from a bold call, he goes on to predict that within five years, SpaceX will launch more AI capacity into space every year than the cumulative total of all AI on Earth today. That’s truly mind-blowing.
But we know Elon’s record with bold predictions. He has a low success rate on the timing, and a high one on the outcome. Arguably the key here isn’t the timeline, it’s the destination. At least some AI compute is heading to space. But how much?
The blocker isn’t what most people expect it to be. Launch is becoming less of a gating constraint. Starship’s reusable heavy lift will soon place over 100 metric tonnes into orbit per flight. The real problems are thermal, electrical, and operational. GPU-dense clusters produce enormous heat, and while Earth’s atmosphere lets you cool them cheaply with convection, orbit gives you nothing but vacuum. Radiators work, but they scale poorly and add mass fast.
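To see why radiators “scale poorly,” here is a Stefan-Boltzmann sketch. The emissivity, radiator temperature, and the 1 MW cluster size are illustrative inputs of mine, and the calculation ignores heat absorbed from the Sun and Earth, so it flatters the radiator:

```python
# How much radiator area does it take to reject 1 MW in vacuum?
# Radiation is the only heat path in orbit (no convection), so:
#   P = epsilon * sigma * A * T^4   (Stefan-Boltzmann law)
SIGMA = 5.670e-8   # W/m^2/K^4, Stefan-Boltzmann constant
EMISSIVITY = 0.90  # good radiator coating (assumed)
TEMP_K = 300       # radiator surface temperature, ~27 C (assumed)
HEAT_LOAD_W = 1e6  # a modest 1 MW GPU cluster (illustrative)

flux = EMISSIVITY * SIGMA * TEMP_K**4  # W/m^2 rejected per radiating face
area_one_sided = HEAT_LOAD_W / flux
area_two_sided = area_one_sided / 2    # flat panels radiate from both faces

print(f"Rejected flux: {flux:.0f} W/m^2 per face")
print(f"Radiator area: ~{area_two_sided:.0f} m^2 (double-sided panels)")
```

Roughly a thousand square meters of deployed panel per megawatt, and the load scales linearly with every GPU you add, while running the radiators hotter to shrink them forces the chips to run hotter too.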
Power is similarly complicated: solar flux in orbit is strong, but AI clusters need continuous, dispatchable power and low Earth orbit means regular eclipse periods that demand heroic engineering to bridge. And unlike terrestrial data centers where a technician can walk in and fix a failing server, orbital hardware must be autonomous from day one. Each of these problems is solvable. But none of them will be solved within 36 months.
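The eclipse problem is easy to put in numbers. The orbit and battery figures here are rough assumptions of mine, not SpaceX specs, and real batteries can’t be drained to zero, so actual mass would be higher:

```python
# Bridging LEO eclipse with batteries: rough sizing for a 1 MW load.
ORBIT_PERIOD_MIN = 92    # typical LEO orbital period (assumed)
ECLIPSE_MIN = 35         # worst-case shadow time per orbit (assumed)
LOAD_KW = 1000           # 1 MW continuous cluster (illustrative)
SPECIFIC_ENERGY = 200    # Wh/kg, optimistic space-rated lithium-ion

energy_kwh = LOAD_KW * (ECLIPSE_MIN / 60)         # energy needed per eclipse
battery_kg = energy_kwh * 1000 / SPECIFIC_ENERGY  # mass to store it
orbits_per_day = 24 * 60 / ORBIT_PERIOD_MIN
cycles_per_year = orbits_per_day * 365            # deep cycles wear cells fast

print(f"Energy per eclipse: {energy_kwh:.0f} kWh")
print(f"Battery mass:       ~{battery_kg:.0f} kg")
print(f"Charge cycles/year: ~{cycles_per_year:.0f}")
```

Call it three tonnes of battery per megawatt, cycled over five thousand times a year. A dawn-dusk orbit sidesteps most eclipses, which is presumably why that geometry keeps coming up in these proposals, but it constrains everything else about the constellation.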
Hyperscalers like Microsoft and Amazon are currently solving AI scaling by colocating compute with power. Space reverses that logic. Instead of bringing power to compute, you bring compute to hostile energy conditions. So from a purely execution standpoint, the 36-month timeline is aggressively unrealistic for meaningful deployment.
So what’s feasible? Strip away the showstopping numbers and something more credible and strategically coherent emerges underneath. A demonstration-scale inference cluster (not training), optimized for specific edge workloads, is not absurd as a pathfinder system in the next 36 months. Think “minimum viable orbital compute”.
Orbital compute is positioned to occupy a specific and genuinely valuable niche: wildfire detection over regions with zero connectivity, defense applications where the requirement is precisely that data never touches the ground, coordination logic for satellite constellations. Narrow use cases. High strategic value. Near-term feasibility. That’s the real story here. Not ChatGPT in orbit, but a surgical wedge into workloads that ground-based infrastructure structurally cannot serve.
There we have it. A million-GPU cluster in orbit is highly unlikely anytime soon. Terrestrial grids, particularly those backed by nuclear or gas, offer 24/7 dispatchable power at far lower capital intensity. And the market will solve the supply side. For the time being, this is probably a narrative that’s being teased for the mooted SpaceX IPO. It wouldn’t be the first time. Walter Isaacson’s biography of Elon Musk includes this passage:
Night after night, Musk sat upright on the edge of his bed next to Grimes, unable to sleep. Some nights he did not move until dawn. Tesla had survived the surges and storms of 2018, but it needed to raise another round of financing to keep operating, and the short sellers were still circling like vultures. In March 2019, he reentered crisis-drama mode. “We have to raise money or we’re [in trouble],” he said to Grimes one dawn. He needed to come up with a grand idea that would turn the narrative around and convince investors that Tesla would become the world’s most valuable car company. (Emphasis added, profanity removed)
So Elon has form. He’s used narratives to achieve his goals in the capital markets before. But given that his mission is Mars and making us a multiplanetary species, and this maneuver will provide access to more capital at a lower cost of capital, let’s give him a pass on this one.
Coming full circle, The Great Wall of China largely failed at its main purpose. The wall was meant to keep out nomadic invaders from the North, particularly the Mongols. But it didn’t work very well. The Mongols under Genghis Khan and later Kublai Khan conquered China anyway, often simply by finding gates left open by bribed guards or sympathetic officials.
Perhaps the purpose of the aggressive timelines on space-based data centers isn’t to do inference in space anytime soon. It’s a narrative that’s needed to raise capital for the main thing (just, you know, going to Mars ¯\_(ツ)_/¯). Whether it works very well, we’ll find out soon.
For now, file this one, at best, under Full Self Driving. Promised as a year away every year for a decade. But more likely Hyperloop or The Boring Company. Ideas that expand what seems possible, that Elon probably believes in, and that will arrive later, smaller and stranger than advertised. The orbital data center will exist. Just not as described, and not when promised.
Please “Like ♥️” this piece!



