Every era has its defining question. Ours may be this: What happens when intelligence itself becomes a resource, one that exceeds our own by orders of magnitude we can barely imagine?

Elon Musk recently put it bluntly: “I think we’re quite close to digital superintelligence. It may happen this year, maybe it doesn’t happen this year—next year for sure.” Whether you take his timeline literally or not, the very fact that leading voices in AI research are openly discussing artificial superintelligence (ASI) suggests the world is approaching a point of no return.

So, what exactly is ASI? Definitions vary, but they circle a common idea: an intelligence smarter than any human in every domain. Some describe it as “creativity, problem-solving, and foresight beyond human capacity.” Others simplify it: “Smarter than any human at anything.” However you phrase it, ASI implies a shift not just in the tools we use but in the hierarchy of intelligence itself.

Warp Speed for Science

One of the most provocative implications of ASI is the acceleration of discovery. Imagine compressing a century of breakthroughs into a single year. Anthropic CEO Dario Amodei has suggested we could see a doubling of the human lifespan in the next decade, thanks to AI-driven biology and medicine. Beyond health, ASI could engineer new materials, design molecular catalysts that unlock new industries, or solve water scarcity with radical new desalination techniques.

If today’s AI already drafts code, analyzes proteins, and generates art, what happens when its “cognitive horsepower” is a billion times greater than ours? That’s roughly the same leap as from a hamster to a human. With that kind of gap, it’s not hyperbole to say ASI could discover entirely new branches of physics and redefine the boundaries of human survival.

The Last ASI?

But there’s a darker side to the narrative. Jeff Clune, an advisor to DeepMind, has argued that “the first ASI is likely to be the last ASI.” The reasoning is simple: a self-improving superintelligence could suppress the development of any rival. In a winner-take-all dynamic, whoever builds the first aligned ASI might control the future, while everyone else may be locked out permanently.

This is why governments are already whispering about nationalization. If a company within your borders creates what amounts to a godlike intelligence, do you really allow it to remain private? Or do you seize it, claiming it as a matter of national security? The stakes are not merely competitive—they are existential.

Betting on the First—and Last—Superintelligence

Consider Ilya Sutskever, a cofounder of OpenAI, who left in 2024 to start Safe Superintelligence (SSI). His mission is nothing less than to build the first aligned ASI. Investors rushed in with $6 billion, giving the startup a $32 billion valuation before it had a product. Why? Because if Jeff Clune is right that the first ASI is the last, then the “winner” of this race doesn’t just dominate markets; it defines the trajectory of civilization itself.

This explains why some of the world’s most powerful investors, from a16z and Sequoia to Nvidia, couldn’t afford to sit on the sidelines. If the prize is “owning” superintelligence, then missing out isn’t just bad business; it’s an irreversible loss of influence over the future.

The Questions We Must Confront

For all the optimism, the hard questions remain. Who decides how an ASI is aligned? What values are baked into its architecture? And how do we ensure that humanity, in all its messy diversity, remains at the center of decisions made by something billions of times smarter than we are?

We’ve built powerful tools before—fire, nuclear energy, the internet—but none with the capacity to outthink us in every domain. If intelligence is the ultimate advantage, then creating an entity with near-infinite amounts of it forces us to ask whether we are preparing for partnership, stewardship, or obsolescence.

Whether ASI arrives in 2025 or 2035, one thing is certain: the race is already underway, and the outcome will reshape everything from science to sovereignty. This isn’t just another tech revolution. It’s the beginning of a new evolutionary hierarchy—one where humans may no longer be at the top.
