By Futurist Thomas Frey
The Blueprint That Promises Everything
Peter Diamandis and Alexander Wissner-Gross just released “Solve Everything: Achieving Abundance by 2035”—a comprehensive blueprint for using AI to systematically solve nearly every major human challenge within a decade. Their central thesis: superintelligence is no longer a question of “if” but of “where we point it.”
They introduce frameworks like the “Industrial Intelligence Stack” for converting real-world problems into solvable systems. They outline 15 ambitious “Moonshots,” ranging from organ abundance to clean energy to orbital debris removal.
I’m a fan of both authors. The framework represents sophisticated thinking about AI’s potential. But when people get involved, even brilliant plans go sideways. Technology deploys into human systems filled with politics, incentives, and messy realities that resist elegant frameworks.
Let me take this vision seriously while examining it from the human perspective the blueprint sometimes overlooks.
What They Get Profoundly Right
The core insight about abundance is correct. Every major civilizational advance—agriculture, electricity, computing—made scarce resources abundant. If AI makes expert-level intelligence cheap and widely available, that’s potentially transformative.
The “domain maturation curve” is useful. Their six-stage model from L0 (“The Muddle”) to L5 (“Solved,” as reliable as tap water) helps predict what becomes possible when. Different domains are at different stages, and understanding where each sits matters.
Outcome-based metrics matter. The shift from measuring inputs (hours worked) to outcomes (problems solved) is overdue. Their “Return on Cognitive Spend” concept pushes thinking in the right direction.
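The excerpt doesn’t publish a formula for Return on Cognitive Spend, so here is a back-of-the-envelope sketch of one plausible reading—value of outcomes delivered divided by the dollar cost of the cognition that produced them. Every name and number below is my assumption, not the authors’:

```python
# Hypothetical sketch of a "Return on Cognitive Spend" calculation.
# Assumes ROCS = value of outcomes / cost of cognition (compute plus
# human time). The formula and all figures are illustrative guesses.

def return_on_cognitive_spend(outcome_value: float,
                              compute_cost: float,
                              human_hours: float,
                              hourly_rate: float = 150.0) -> float:
    """Outcome value produced per dollar of cognitive spend."""
    cognitive_spend = compute_cost + human_hours * hourly_rate
    return outcome_value / cognitive_spend

# Example: a $2M problem solved with $40K of compute and 200 expert hours.
print(return_on_cognitive_spend(2_000_000, 40_000, 200))  # ~28.6x
```

Whatever the authors’ exact formulation, the useful move is the denominator: pricing human attention and compute together forces the question of whether the outcome was worth the thinking spent on it.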
The timeline feels plausible. Unlike projections claiming everything changes immediately or nothing happens for decades, their 2026-2035 timeframe aligns with where technology actually is.
These are genuine contributions.
Where Human Reality Gets Messy
The blueprint treats intelligence as a fully commoditizable resource needing proper infrastructure and targeting. But human systems don’t work like that.
People resist change even when it benefits them. The plan acknowledges “The Muddle” (bureaucratic resistance) but treats it as a problem to overcome through better frameworks. In reality, institutional resistance reflects legitimate concerns about disruption and power shifts. You can’t framework your way past human politics.
Consider healthcare. The blueprint envisions AI rapidly advancing medicine from fragmented to solved commodity. But healthcare isn’t primarily a technical problem—it’s regulatory, economic, and political. We have treatments that work but can’t get approved for years. AI making discovery faster doesn’t automatically fix these human-created bottlenecks.
Coordination at scale is brutally difficult. The plan requires global cooperation on data trusts, compute allocation, and outcome verification. Getting countries to coordinate while competing economically and geopolitically? Getting corporations to share data when data is competitive advantage? The 18-month “strategic window” for convergence assumes everyone acts rationally. History suggests coordination failures are more likely.
The “solved” state might not be achievable. Many domains are fundamentally complex in ways that resist commoditization. Biology isn’t physics. Living systems are chaotic, context-dependent, full of emergent properties. You can’t always “pour compute” into biological problems like pouring concrete.
Take longevity research. AI can help navigate complexity, but claiming we’ll reach “longevity escape velocity” (the point at which life expectancy increases by more than one year for every year that passes) by 2035 is magical thinking.
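To make the arithmetic concrete, here is a toy model, not a forecast: each calendar year you lose a year, and medical progress hands some fraction back. Every number below is an illustrative assumption.

```python
# Toy model of "longevity escape velocity" (LEV): each calendar year
# subtracts one year of remaining life expectancy, and progress adds
# annual_gain years back. If annual_gain exceeds 1.0, remaining life
# expectancy never hits zero. Rates here are invented, not data.

def years_remaining(initial_remaining: float,
                    annual_gain: float,
                    horizon: int) -> list[float]:
    remaining = initial_remaining
    trajectory = []
    for _ in range(horizon):
        remaining = remaining - 1.0 + annual_gain  # age a year, gain some back
        trajectory.append(round(remaining, 2))
    return trajectory

print(years_remaining(30, 0.3, 5))   # sub-LEV: remaining years shrink
print(years_remaining(30, 1.2, 5))   # past LEV: remaining years grow
```

The toy makes the bar visible: crossing LEV requires sustained gains above a full year per year, roughly an order of magnitude beyond the historical trend of a few months of added life expectancy per year.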
Inequality could get worse. The blueprint emphasizes “Universal Basic Capability” and fairness, but mechanisms are vague. If compute becomes critical, wealth concentrates with whoever controls it. If outcome-based systems reward AI proficiency, the digitally sophisticated pull ahead. The plan assumes surpluses get reinvested in fairness, but surpluses typically flow to whoever captured them.

The Unintended Consequences Nobody Discusses
When everything is measured, metrics get gamed. Goodhart’s Law: “When a measure becomes a target, it ceases to be a good measure.” Medical systems already game metrics. AI systems optimizing for benchmarks will find clever ways to hit targets without solving underlying problems.
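A minimal sketch of the dynamic, with both functions invented purely for illustration: an optimizer greedily climbs a proxy metric whose gradient eventually points away from the true objective.

```python
# Minimal Goodhart's Law illustration: an optimizer that climbs an
# imperfect proxy metric eventually hurts the true objective. Both
# functions are invented for illustration; x is the degree to which
# the system "teaches to the test."

def true_quality(x: float) -> float:
    return x - 0.5 * x * x        # real value peaks, then declines

def proxy_score(x: float) -> float:
    return x                      # the benchmark rewards x without limit

best_x, step = 0.0, 0.1
for _ in range(30):
    if proxy_score(best_x + step) > proxy_score(best_x):
        best_x += step            # greedy hill-climb on the proxy

print(f"proxy={proxy_score(best_x):.1f}, true={true_quality(best_x):.1f}")
# proxy keeps rising (3.0) while true quality has gone negative (-1.5)
```

Nothing in the loop is malicious. The divergence is structural: the moment the proxy and the objective part ways, relentless optimization follows the proxy.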
Abundance might not feel abundant. Humans adapt quickly to improvements. What felt miraculous becomes normal within months. The blueprint envisions life extension, disease elimination, cheap energy. But will people in 2035 feel abundant? Or find new scarcities—status, meaning, purpose, authentic connection?
The “Quiet Hum” scenario describes a world where everything just works. But humans thrive on challenges, growth, contribution—not on things working automatically. Material abundance could mean psychological impoverishment.
We might solve the wrong problems. The Moonshots focus on quantifiable challenges: organ availability, energy cost, materials discovery. But what about the crisis of meaning? Social fragmentation? Purpose in a post-scarcity world? Mental health amid abundance? These are harder to frame as technical problems, so frameworks naturally de-prioritize them.
How This Actually Unfolds
Here’s my prediction:
Phase 1 (2026-2027): The Hype Cycle. Early AI successes in narrow domains create enthusiasm. Investment floods in. Moonshot projects launch. Optimism peaks.
Phase 2 (2027-2028): Early Wins. Real progress in contained domains—drug discovery, materials science, software verification. Proofs of concept validate the approach. More ambitious goals get funded.
Phase 3 (2028-2030): Reality Check. Messier domains hit walls. Biology proves more complex. Human systems resist change. Coordination failures emerge. Regulation lags. Early enthusiasm meets practical constraints.
Phase 4 (2031-2033): The Bifurcation. Some domains collapse to commoditization—probably software, maybe materials, possibly narrow medical applications. Others stall at L3 or L4. We get a mixed landscape, not universal abundance.
Phase 5 (2034-2035): New Equilibrium. Dramatically better tools and some genuinely solved problems. But also new problems created by solutions, inequality concerns inadequately addressed, human challenges resistant to technical fixes. Life is better in measurable ways, but it doesn’t feel like abundance because humans have adapted their expectations upward.
This isn’t failure. It’s how transformative technologies actually deploy.

The Questions We Should Be Asking
Who decides the targets? The plan calls for “Targeting Authorities.” But who chooses them? Whose values do they represent?
How do we handle transition pain? Even if we reach abundance by 2035, the transition could be brutal. How do we support people whose expertise becomes obsolete?
What’s the governance model? Global coordination requires unprecedented cooperation. What institutions do this? What gives them legitimacy?
How do we preserve human agency? If AI makes better decisions across domains, what role do humans play? Is that psychologically sustainable?
What’s actually worth solving? Not all problems should be optimized away. Struggle creates meaning. Constraints drive creativity.
The Path Forward That Might Actually Work
Start with narrow, contained domains where failure is reversible. Build confidence before tackling systemic challenges.
Build in human override everywhere. Humans should always be able to intervene, question, redirect. Efficiency is good; unaccountability is dangerous.
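In software terms, one simple pattern for this, sketched below with entirely hypothetical names, is a gate that every consequential automated action must pass through, with irreversible actions always requiring explicit human sign-off:

```python
# One pattern for "human override everywhere": every consequential
# automated action passes through a gate a person can approve, reject,
# or hold. All names here are hypothetical sketch code, not any real API.

from dataclasses import dataclass
from typing import Callable

@dataclass
class ProposedAction:
    description: str
    execute: Callable[[], None]
    reversible: bool

def run_with_override(action: ProposedAction,
                      ask_human: Callable[[ProposedAction], bool]) -> None:
    # Irreversible actions always require explicit sign-off;
    # reversible ones could be auto-approved with post-hoc review.
    if action.reversible or ask_human(action):
        action.execute()
    else:
        print(f"Held for review: {action.description}")

# Example: a console prompt as the simplest possible human gate.
run_with_override(
    ProposedAction("Reallocate regional compute budget",
                   lambda: print("...done"), reversible=False),
    ask_human=lambda a: input(f"Approve '{a.description}'? [y/N] ") == "y",
)
```

The design choice that matters is the default: when no human answers, the system holds rather than proceeds.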
Prioritize distribution alongside capability. For every advance, ask: How do benefits get shared? What redistribution ensures this doesn’t just empower the already powerful?
Measure what matters to humans. Track meaning, connection, agency, satisfaction—not just efficiency. If people are materially better off but psychologically worse, the system failed.
Plan for problems you create. Every solution creates new problems. Build adjustment mechanisms from the start.
Why I’m Still Optimistic (But Cautiously)
Despite these concerns, I’m not pessimistic. The “Solve Everything” blueprint represents important thinking. The technology is real. The potential is genuine. The framework could work—in narrow domains, with oversight, with thoughtful governance, with humility.
My concern isn’t that the vision is wrong—it’s incomplete. It focuses on what’s technically possible while underweighting what’s humanly difficult. It treats coordination problems as solvable through frameworks when they’re often more fundamental.
The future won’t be clean progression from L0 to L5 across all domains. It’ll be messy, uneven, full of surprises. Some things will work better than hoped. Others will fail unexpectedly. Humans will resist change, game systems, find new scarcities.
That’s not a bug—it’s a feature. The human element isn’t what we need to overcome to reach abundance. It’s what abundance is supposed to serve.
So yes, let’s build the infrastructure. Let’s pursue Moonshots. But let’s do it with eyes open to human complexity, humility about predicting outcomes, systems preserving agency and meaning, and constant attention to who benefits and who gets left behind.
The revolution is coming. The question isn’t whether we can achieve technical breakthroughs—we probably can. The question is whether we deploy them in ways that serve human flourishing rather than just hitting benchmarks.
That requires not just brilliant technologists but wisdom about human nature, social systems, and what makes life worth living.
We need both the vision and the caution. The blueprint and the humans. The abundance and the meaning.
That’s the future worth building.
Related Articles:
Solve Everything: Achieving Abundance by 2035 – The full blueprint by Dr. Alexander Wissner-Gross and Dr. Peter Diamandis
The Challenges of Coordinating AI Development
Why Technological Abundance Doesn’t Guarantee Human Wellbeing

