By Futurist Thomas Frey

The Economics Just Flipped

Here’s the statement that should stop you cold: Within 36 months, space will be the cheapest place to deploy AI compute infrastructure. Not the most advanced. Not the most cutting-edge. The cheapest.

This isn’t speculation from a think tank. This is Elon Musk’s engineering timeline, traced back to first principles and grounded in physics that doesn’t care about our intuitions. And if he’s right—and the math suggests he is—we’re about to witness the largest infrastructure migration in human history.

Not because space is cool. Because space is economical.

Let me walk through why this matters, what it unlocks, and why the next three years will determine whether we’re participants or spectators in the next phase of industrial civilization.

The Power Problem Nobody’s Solving on Earth

The bottleneck isn’t chips. It’s not even AI models. It’s electricity.

Earth’s electricity output outside China is flat. It’s not growing. Meanwhile, chip production is growing exponentially, and each new generation of AI training requires orders of magnitude more power. You can’t manufacture what you can’t turn on.

To power xAI’s Colossus cluster—one gigawatt—Elon’s team had to gang together turbines, fight through Tennessee permits, build infrastructure across state lines in Mississippi, and run high-voltage transmission lines for miles. This was described as “a miracle in series.” And it’s one gigawatt. One.

The limiting factor? Turbine blades and vanes. There are three casting companies in the world that make them. They’re sold out through 2030.

You can’t scale by building more power plants. The industrial base to build power plants doesn’t exist at the required scale. And even if it did, the permitting timeline would stretch into decades. You can’t cover Nevada in solar panels fast enough to matter.

So where do you go when Earth runs out of room?

You go where the room is infinite and the sun never sets.

Why Space Solar Changes Everything

Solar panels in space generate five times more power than on Earth. The physics is straightforward:

No atmosphere means none of the roughly 30% energy loss to scattering and absorption. No day-night cycle means continuous operation. No clouds, no weather, no seasonal variation. It’s always peak solar output.

No batteries needed. On Earth, solar requires massive battery arrays to store energy for nighttime operation. In space, you generate power 24/7. When you factor in storage costs, space solar becomes ten times cheaper than terrestrial solar.
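
The five-times figure is easy to sanity-check. The sketch below uses rough assumed values (a 1361 W/m² solar constant, ~1000 W/m² peak surface irradiance, a 20% terrestrial capacity factor); none of these numbers come from the article itself.

```python
# Back-of-envelope check of the "5x" claim. All inputs are
# rough assumed values, not figures from the article.

SOLAR_CONSTANT = 1361.0   # W/m^2 above the atmosphere
SURFACE_PEAK = 1000.0     # W/m^2 at noon, after ~30% atmospheric loss
CAPACITY_FACTOR = 0.20    # typical terrestrial average (night, clouds, angle)

space_avg = SOLAR_CONSTANT                  # continuous sunlight in orbit
earth_avg = SURFACE_PEAK * CAPACITY_FACTOR  # time-averaged surface output

ratio = space_avg / earth_avg
print(f"space/earth average output ratio ≈ {ratio:.1f}x")  # ≈ 6.8x
```

With a more generous 25% terrestrial capacity factor the ratio drops to about 5.4, so the five-times claim is in the right range even before storage costs are counted.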

Unlimited scale. There’s no permitting process for orbital real estate. There’s no environmental impact study for placing a data center in the vacuum. Once you solve launch economics, you can deploy infrastructure at a pace that would be impossible on Earth.

This is the unlock. Starship achieving full reusability and high launch cadence doesn’t just make space accessible. It makes space the path of least resistance for anyone trying to deploy compute at scale.

Elon’s prediction: “In 36 months, probably closer to 30 months, the most economically compelling place to put AI will be space. It will then get ridiculously better to be in space.”

Five years from now? SpaceX will be launching more AI compute capacity per year than the cumulative total currently operating on Earth. A few hundred gigawatts annually, rising toward a terawatt.

The Moon Factory: When Launch Becomes the Bottleneck

A terawatt of compute in space requires roughly 10,000 Starship launches per year. That’s more than one launch every hour, around the clock, for a year.
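
The cadence arithmetic follows directly from the article’s own figures; the only derived number is the implied capacity per launch.

```python
# Cadence and per-launch capacity implied by the article's figures.
LAUNCHES_PER_YEAR = 10_000
TARGET_POWER_W = 1e12        # one terawatt of orbital compute

hours_per_year = 365 * 24    # 8760
launches_per_hour = LAUNCHES_PER_YEAR / hours_per_year
power_per_launch_mw = TARGET_POWER_W / LAUNCHES_PER_YEAR / 1e6

print(f"{launches_per_hour:.2f} launches per hour")           # 1.14
print(f"{power_per_launch_mw:.0f} MW of capacity per launch")  # 100 MW
```

At a nominal Starship payload of around 100 tonnes, 100 MW per launch implies roughly a megawatt of deployable solar-plus-compute per tonne, an aggressive but explicit assumption baked into the roadmap.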

Even with full reusability, Earth launch at that scale becomes the constraint. Which is why the roadmap doesn’t stop at orbital data centers. It goes to the Moon.

Lunar soil is 20% silicon. Mine it, refine it, manufacture solar cells on-site. Aluminum for radiators—also abundant on the Moon. The chips? Those you’d send from Earth initially, because they’re light. Eventually, you manufacture those on the Moon too.

Then comes the electromagnetic mass driver. A lunar catapult shooting AI satellites into deep space at 2.5 kilometers per second. No propellant needed, just electricity. Launch capacity: billions to tens of billions of tons per year.
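
The energy cost of an electromagnetic launch follows from basic kinematics. This sketch assumes a one-tonne payload and ignores drive losses; the 2.5 km/s figure is from the article, the rest is standard physics.

```python
# Electrical energy needed per tonne at the quoted 2.5 km/s.
# (Lunar escape velocity is about 2.38 km/s, so 2.5 km/s clears it.)
V = 2500.0     # m/s, muzzle velocity from the article
MASS = 1000.0  # kg, one tonne of payload (assumed)

energy_j = 0.5 * MASS * V**2   # kinetic energy, ignoring drive losses
energy_kwh = energy_j / 3.6e6  # joules to kilowatt-hours

print(f"{energy_j:.3e} J ≈ {energy_kwh:.0f} kWh per tonne")  # ≈ 868 kWh
```

Under 900 kWh of electricity per tonne of payload, before conversion losses, which is why a propellant-free launcher pairs naturally with on-site solar manufacturing.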

This isn’t science fiction. This is an engineering roadmap with clearly defined steps, each one unlocking the next. With a Moon factory and mass driver, you reach roughly a petawatt of new capacity per year: still a vanishing fraction of the Sun’s total output, yet on the order of fifty times humanity’s entire current energy consumption.

We’re not talking about incremental improvements to terrestrial infrastructure. We’re talking about abandoning terrestrial constraints entirely.

Digital Human Emulation: The Unlock Before the Robots

While orbital data centers are being built, something else is converging: AI that can operate any software a human with a computer can operate.

Think about this carefully. Before you have physical robots reshaping the material world, the most an AI can do is move electrons. The limiting case of electron-moving is a human at a computer.

If you can emulate that—if AI can use Excel, Photoshop, CAD software, chip design tools, accounting systems, CRMs, literally any desktop application—you’ve created the most valuable product in economic history.

The math is brutal. The most valuable companies by market cap produce digital output. Nvidia FTPs files to Taiwan. Apple sends files to China. Microsoft, Meta, Google—all digital. If you have a human emulator, you can replicate the operational core of a trillion-dollar company overnight.

Start with customer service. That’s 1% of the global economy, close to $1 trillion annually. AI can use the same apps outsourced teams use today. No API development needed. No integration. Just operate the software.

Then move up the difficulty curve. Chip design. Run Cadence and Synopsys. Execute 10,000 simulations simultaneously. Eventually, the AI knows what the chip should look like without using tools at all.

By the end of 2026, Elon expects digital human emulation to be solved. Not “getting better.” Solved.

This is the precursor to physical robots. Once AI can operate in the digital realm at human-equivalent or better performance, the transfer to physical manipulation becomes an engineering problem, not a conceptual one.

Optimus: The Recursive Exponential

Once you have digital human emulation, humanoid robots become what Elon calls “the infinite money glitch.”

The compounding is multiplicative, not additive: exponentially improving digital intelligence × exponentially improving chip capability × exponentially improving electromechanical dexterity. Then the robots start building the robots.

There are only three hard problems for humanoid robots:

Real-world intelligence. Tesla solved this for autonomous vehicles. The transfer to robots is direct—both require navigating unstructured environments and making real-time decisions based on sensory input.

The hand. This is harder than everything else combined. Human hands are evolutionary masterpieces—27 bones, 34 muscles, millions of years of optimization. Tesla’s designing custom actuators, motors, gears, and power electronics from first principles. There’s not a single off-the-shelf component.

Scale manufacturing. Optimus Gen 3 targets a million units per year. Gen 4 targets ten million per year. This isn’t a prototype. This is industrial-scale deployment.

For training, Tesla will build an “Optimus Academy”—10,000 to 30,000 robots doing self-play in the real world, testing different tasks. Millions more in simulation, using Tesla’s physics-accurate environment to close the sim-to-real gap.

Best initial use case? Any 24/7 operation. Robots don’t sleep. For edge compute applications where power is distributed, you charge robots at night, when the grid has roughly 500 gigawatts of capacity sitting idle.
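
The night-charging claim is easy to bound. The 500-gigawatt figure is from the article; the ~2 kW per-robot charge rate is an assumed placeholder, not a published spec.

```python
IDLE_CAPACITY_W = 500e9  # nighttime grid headroom, from the article
CHARGE_RATE_W = 2_000    # ~2 kW per robot charger (assumed, not from the article)

robots = IDLE_CAPACITY_W / CHARGE_RATE_W
print(f"{robots:,.0f} robots chargeable simultaneously")  # 250,000,000
```

Even at several times that charge rate, the idle capacity covers tens of millions of robots, far beyond the ten-million-per-year Gen 4 production target.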

The economic implication: “Pure AI, pure robotics corporations will far outperform any corporations that have humans in the loop. This will happen very quickly.”

We’ve seen this movie before. “Computer” used to be a job title: entire office buildings full of humans doing calculations by hand. Now one laptop replaces them all. That’s the trajectory for physical labor once Optimus scales.

The China Problem: Why the Robot Front Matters

Here’s the uncomfortable math: In a competition based on human labor, America cannot win.

China has four times the U.S. population. America’s birth rate has been below replacement since 1971. We’re approaching more deaths than births domestically.

China does roughly twice as much ore refining as the rest of the world combined. For critical materials like gallium—essential for solar panels—China does 98% of global refining.

China’s electricity output is already more than double America’s and climbing toward triple. Electricity is a proxy for industrial capacity: if their electrical output reaches 3x ours, their manufacturing capacity will be roughly 3x ours.

So you have one quarter the population, potentially lower productivity per person, and declining demographics. The conclusion is stark: “We definitely can’t win on the human front.”

The only path forward? The robot front.

Get to hundreds of millions of humanoid robots per year, and you become the most competitive nation by far. The recursive loop closes fast—robots helping to build robots. But you have to close that loop before China does.

This is why Tesla is building lithium and nickel refineries in Texas. This is why they’re constructing the largest cathode refinery outside China—in fact, the only one in America. This is why Optimus has to succeed. This is why space manufacturing matters.

We’re not competing on labor anymore. We’re competing on automation speed, intelligence deployment, and how fast we can scale exponential technologies.

The Convergence Nobody’s Connecting

What makes this moment extraordinary isn’t any single technology. It’s the systems-level convergence.

You can’t put data centers in space without cheap launch. You can’t get cheap launch without Starship. You can’t build Starships fast enough without robots. You can’t build robots without chips. You can’t make enough chips without fabs. You can’t power fabs on Earth at scale without hitting electrical limits.

So you go to space.

It’s a self-reinforcing system. And once you achieve the first level—cheap orbital launch—every subsequent level becomes more achievable, not less.

This is how civilizations climb the Kardashev scale. You start by harnessing a tiny fraction of your star’s energy. You build infrastructure to capture more. You use that energy to build better infrastructure. You repeat.
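
The Kardashev framing can be made quantitative with Carl Sagan’s interpolation formula, K = (log₁₀ P − 6) / 10 with P in watts; the ~2×10¹³ W figure for current world energy use is an approximation, not a number from the article.

```python
import math

def kardashev(power_w: float) -> float:
    """Carl Sagan's interpolation: K = (log10(P) - 6) / 10, P in watts."""
    return (math.log10(power_w) - 6) / 10

print(f"humanity today (~2e13 W): K ≈ {kardashev(2e13):.2f}")  # ≈ 0.73
print(f"one petawatt (1e15 W):    K ≈ {kardashev(1e15):.2f}")  # 0.90
```

On this scale a petawatt of harnessed power moves civilization from roughly K ≈ 0.73 to K ≈ 0.90, with Type I defined at 10¹⁶ watts.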

We’re at the beginning of that curve. And the acceleration is about to become visible to everyone, not just the people building it.

The 36-Month Window

If the timeline holds—and there’s no physics reason it shouldn’t—we’re 30 to 36 months away from space being the economically optimal location for AI infrastructure.

That’s not “someday.” That’s 2027 or early 2028.

By 2030, orbital data centers could be operational at scale. By 2032, lunar manufacturing could begin. By 2035, the mass driver could be shooting satellites into deep space.

These aren’t milestones a century away. They’re milestones within the next decade.

The question isn’t whether this future arrives. The question is whether we participate or spectate. Whether American companies lead this transition or watch it happen from the ground while other nations claim the high frontier.

Because here’s what nobody’s saying out loud: once compute infrastructure moves to space, the nations that control orbital manufacturing control the future of artificial intelligence. And the nations that control the future of AI control everything downstream—economics, defense, innovation, quality of life.

This isn’t about bragging rights. It’s about who writes the rules for the next phase of civilization.

The 36-month clock is running. And it doesn’t care whether we’re ready.

