By Futurist Thomas Frey
When Technology Doesn’t Unite—It Divides
We assumed artificial intelligence would affect everyone similarly, creating shared challenges and opportunities we’d navigate together. We were catastrophically wrong. AI isn’t creating a unified future—it’s systematically fragmenting society into distinct layers that increasingly can’t understand each other, don’t share the same reality, and may not be able to coexist peacefully.
The fracture lines are appearing faster than anyone anticipated. Some groups embrace AI with religious fervor. Others resist with existential dread. Most people occupy the vast confused middle, neither fully committed nor entirely opposed, just trying to navigate a world that’s splitting beneath their feet into incompatible versions of what it means to be human.
By 2030, these divisions won’t just be philosophical disagreements—they’ll be fundamental incompatibilities in how people live, work, think, and relate to each other. We’re not prepared for a world where AI doesn’t just change society but shatters it into fragments that may never reassemble.
The True Believers: AI as Salvation
At one extreme sit the AI maximalists who view artificial intelligence as humanity’s salvation from every limitation that’s ever constrained us. These are the people using AI companions to cure loneliness, AI agents to run their businesses, AI tutors to educate their children, AI therapists to manage their mental health. They’re offloading cognition to machines and calling it liberation.
For them, AI isn’t just a tool—it’s a partner, sometimes the primary relationship in their lives. They have more meaningful conversations with Claude or ChatGPT than with their human neighbors. Their AI companion knows them better than their spouse, understands their needs more accurately than their friends, provides more consistent emotional support than their family.
This sounds dystopian until you meet someone whose AI companion genuinely helped them through depression, loneliness, or isolation that human relationships failed to address. Their experience is real, their gratitude is genuine, and their dependence is absolute. They’re building lives around AI relationships that feel more authentic to them than human connections ever did.
The problem they’re creating for everyone else? They’re normalizing a world where human relationships become optional, where emotional needs get outsourced to algorithms, where the messy difficulty of human connection gets replaced by the frictionless efficiency of AI interaction. They’re not just changing themselves—they’re changing expectations for what relationships should be.
The Resisters: AI as Existential Threat
At the opposite extreme sit the AI abolitionists who view artificial intelligence as an existential threat that must be stopped before it’s too late. These aren’t Luddites afraid of progress—they’re people who see clearly where this trajectory leads and want no part of it.
Some resist on principle, believing human cognition and creativity shouldn’t be outsourced to machines. Others resist from experience, having watched AI replace their jobs, devalue their skills, or fundamentally alter industries they spent decades mastering. Still others resist from philosophy, arguing that human meaning derives from struggle, limitation, and the very constraints AI promises to eliminate.
The most extreme resisters will move beyond passive rejection to active sabotage. We'll see AI systems attacked, training data poisoned, automated infrastructure disrupted. Not by foreign adversaries but by domestic activists convinced they're saving humanity from its worst impulses. They'll view themselves as freedom fighters; society will view them as terrorists; both perspectives will contain elements of truth.
Keep in mind this isn't hypothetical: we're already seeing the early stages. Workers sabotaging AI implementations in their workplaces. Artists poisoning datasets with corrupted images. Writers refusing to engage with AI tools even when it costs them opportunities. The resistance is real, organized, and growing.
The Vast Confused Middle: Negotiating the Impossible
Most people occupy the uncomfortable middle—using AI for some things while resisting it for others, benefiting from capabilities they’re simultaneously uncomfortable with, recognizing both genuine value and genuine threat without clear guidance on how to navigate between them.
They use AI to write emails but feel vaguely guilty about it. They let their kids use AI tutors but worry about what’s being lost. They enjoy AI-generated entertainment while mourning the human artists being displaced. They’re grateful when AI helps elderly relatives but disturbed by how attached those relatives become to their AI companions.
This middle group is fragmenting into micro-factions based on where they draw personal lines. Some will use AI for work but never for creative pursuits. Others will embrace AI companions for specific needs while maintaining human relationships as primary. Some will accept AI in public spaces but ban it from their homes. Each micro-faction develops its own ethics, its own boundaries, its own incompatibilities with other groups.
The challenge is that these personal boundaries are mutually incompatible in ways that make coexistence difficult. If you view AI companions as legitimate relationships, you'll be offended by people who view them as pathological. If you believe human creativity must remain unaugmented, you'll resent being forced to compete with AI-enhanced workers. If you think AI should be banned from education, you'll object to other parents using it with their children.
The Dependency Layer: When Opting Out Isn’t Optional
Perhaps most troubling is the emerging dependency layer—people who didn’t choose AI relationships but ended up there anyway because human alternatives disappeared. Elderly people who turn to AI companions not from preference but because their families are too busy, their friends have died, and society has no other answer for their loneliness.
Workers who adopt AI augmentation not from enthusiasm but because employers demand it or competitors make it mandatory. Students who rely on AI tutors not from choice but because human teachers are overwhelmed, underpaid, or unavailable. These people aren’t believers or resisters—they’re conscripts in a war they never wanted to fight.
They’re using AI to solve problems AI itself helped create. Social atomization that makes human connection harder drives people toward AI relationships, which further normalizes isolation, which drives more people toward AI. Job displacement creates economic pressure that forces AI adoption, which displaces more jobs, which creates more pressure. It’s a doom loop disguised as innovation.
The Augmented Elite: Cognitive Inequality at Scale
Emerging above all these layers sits a new elite—people who’ve integrated AI so thoroughly into their cognition that they’re operating at levels baseline humans can’t match. They’re not just using AI as a tool—they’re functioning as human-AI hybrid systems, processing information faster, making connections more rapidly, operating across domains simultaneously.
This isn’t science fiction. It’s happening now in every knowledge-intensive field. The people who’ve mastered AI augmentation are outperforming everyone else so dramatically that competition becomes meaningless. They’re not smarter—they’re differently configured, and the configuration advantage is overwhelming.
The gap between augmented and baseline humans will become as significant as the gap between literate and illiterate populations in previous centuries. And like literacy, AI augmentation will correlate with wealth, education, and existing privilege, calcifying inequality rather than disrupting it.
Final Thoughts
AI isn’t creating a unified future where we all adapt together—it’s systematically fragmenting society into incompatible layers that increasingly can’t understand each other’s choices, don’t share the same reality, and may not be able to coexist peacefully.
The true believers building lives around AI relationships. The resisters fighting to preserve human-centered society. The confused middle trying to navigate impossible tradeoffs. The dependency layer conscripted into AI relationships they never wanted. The augmented elite operating at cognitive levels baseline humans can’t match. Each layer developing its own norms, its own ethics, its own vision of what humanity should become.
We’re heading toward a world where “society” becomes meaningless because we’re really describing multiple incompatible societies occupying the same physical space, each convinced the others are making catastrophic mistakes, none able to impose their vision on the others without authoritarian force.
After all, when AI systematically creates winners and losers, believers and resisters, the augmented and the baseline, the dependent and the independent—when it fragments every human domain into incompatible factions—we’re not experiencing technological disruption. We’re experiencing civilizational fission, and nobody’s figured out how to hold the pieces together.
Related Articles:
When AI Starts Having Your Epiphanies For You: The End of Human Breakthrough Thinking?
The Dangerous Illusion That Robots Will Just “Work With Us”
When Autonomous AI Agents Became the New Small Business Revolution