By Futurist Thomas Frey

The Resistance Won’t Come From Fear—It’ll Come From Understanding

If you strip away the poetry and look at power dynamics, the biggest resistance to AI, robotics, and automation won’t come from people who “don’t understand the technology.” It will come from those who understand it perfectly—and see exactly what it threatens: their wealth, their influence, and their control over systems designed to extract value through scarcity and friction.

Let me walk you through the resistance groups forming right now, how they’ll fight, and why the battle won’t look like opposition—it’ll look like controlled adoption designed to preserve existing power structures while appearing to embrace progress.

Legacy Capital Built on Scarcity and Friction

Large financial institutions dependent on opacity, fees, and information asymmetry see AI as an existential threat. When discovery, forecasting, and optimization become broadly accessible, many traditional “value-add” layers disappear. Why pay wealth managers 1.5% annually when AI provides superior investment analysis for free? Why pay transaction fees when automated systems execute trades more efficiently?

How they resist: Favor regulation protecting incumbents under the banner of “stability.” Slow deployment of AI in areas reducing fees, labor arbitrage, or rent-seeking. Fund narratives emphasizing risk over opportunity. The strategy isn’t blocking AI—it’s ensuring AI adoption strengthens existing financial institutions rather than democratizing financial services.

Industrial Giants with Depreciating Physical Moats

Energy incumbents, chemical and materials conglomerates, and manufacturing giants with massive sunk capital see AI-accelerated discovery shortening innovation cycles and lowering barriers to entry. Their advantage has always been scale and capital intensity—both eroded by simulation, generative design, and automated labs.

A startup with AI-driven materials discovery can now compete with companies that spent decades building physical R&D infrastructure. That’s terrifying for incumbents whose moat is that very infrastructure.

How they resist: Push incremental AI adoption optimizing existing processes rather than replacing them. Acquire or suppress disruptive startups rather than scaling them. Lobby against rapid standards shifts that would obsolete infrastructure. They’ll adopt AI—but only in ways reinforcing their existing advantages.

Healthcare Power Structures (Not Medicine Itself)

Insurance monopolies, hospital chains optimized for billing rather than outcomes, and pharma portfolios dependent on blockbuster timelines see personalized medicine, real-time diagnostics, and AI-driven drug discovery undermining fee-for-service models, long approval pipelines, and one-size-fits-all treatments.

AI threatens to collapse healthcare costs by 60-80% through prevention, early detection, and personalized treatment. That’s catastrophic for systems profiting from expensive interventions rather than healthy populations.

How they resist: Emphasize regulatory caution selectively. Limit reimbursement for AI-driven prevention. Frame radical improvement as “unproven” while defending inefficient norms. The messaging won’t be “we oppose better healthcare”—it’ll be “we must ensure safety” while maintaining systems designed for maximum extraction.

Educational and Credentialing Gatekeepers

Traditional universities with endowment-driven inertia, testing and accreditation monopolies, and publishers built for pre-AI pedagogy see AI collapsing the monopoly on knowledge, instruction, and assessment. Learning no longer needs time-based seat requirements or standardized pacing when AI provides personalized education superior to classroom instruction.

Why pay $200,000 for a degree when AI-powered platforms like Cogniate provide personalized education that demonstrates actual competency rather than seat time?

How they resist: Preserve credential scarcity as a proxy for value. Attack AI-based learning as “lower quality” despite evidence to the contrary. Delay recognition of alternative credentials and competency proof. They’ll adopt AI in classrooms while fighting systems threatening their gatekeeping role.

Regulatory Power Brokers and Institutional Bureaucracies

Agencies optimized for control rather than speed, international bodies built for consensus rather than iteration, and legal frameworks frozen in pre-exponential assumptions struggle as AI moves faster than institutional authority. Governance built for linear change can’t handle compounding capability.

How they resist: Default to restriction over redesign. Frame permissionless innovation as chaos. Centralize AI power “for safety” rather than distribute it with accountability. The strategy is capturing AI development within existing regulatory frameworks rather than adapting frameworks to AI’s capabilities.

Narrative Controllers and Attention Economies

Media conglomerates, platform companies monetizing outrage and fear, and think tanks funded to shape public perception see AI threatening their role as filters and framers of reality. When synthesis becomes cheap, influence becomes fragile.

How they resist: Amplify AI fear narratives. Focus on job loss instead of job transformation. Reduce complex futures to simplistic moral panics. The goal isn’t informing the public—it’s maintaining their role as essential intermediaries between information and understanding.

The Uncomfortable Truth: Controlled Adoption

The strongest resistance won’t be overt opposition. It will be controlled adoption. Not “no” to AI—but:

  • “Yes, but only inside existing power structures”
  • “Yes, but slowly”
  • “Yes, but centrally controlled”
  • “Yes, but without changing who benefits”

This is how revolutions are neutralized without ever appearing stopped. Every transformative technology—electricity, automobiles, the internet—was first captured by existing power structures before eventually democratizing under pressure.

The Real Battle Line

This era isn’t dividing technologists from skeptics. It’s dividing those who see AI as a force multiplier for human potential from those who see it as a tool for preserving dominance.

History is clear: every step-change technology is first used to consolidate power before it’s allowed—often under pressure—to expand human capability. The printing press was initially controlled by churches and monarchies. The internet was initially dominated by large institutions before becoming broadly accessible.

Why They’ll Lose Anyway

The difference this time is speed. The window between consolidation and democratization is shrinking. AI capabilities double every 6-12 months. Regulatory capture that once took decades to overcome now faces technological advancement that makes captured systems obsolete before consolidation completes.

Open-source AI models, declining compute costs, and global development mean no single power structure can contain the technology long enough to permanently capture it. The resistance will slow adoption, create inefficiencies, and extract rents temporarily—but they can’t stop the fundamental transformation.

Final Thoughts

The resistance groups forming now aren’t villains—they’re rational actors protecting existing interests. But their strategy of controlled adoption preserving current power structures while appearing to embrace progress won’t work long-term because AI’s trajectory makes their value-extraction models obsolete faster than they can adapt.

Those who act with courage, intelligence, and ethical scale now will decide whether this epoch becomes a renaissance expanding human capability—or simply the past, accelerated with more efficient extraction. The battle isn’t technology versus humanity. It’s democratized capability versus concentrated control.

And this time, for the first time in history, the technology might be moving too fast for the consolidators to win.

