When We Stopped Caring What Was Real
By Futurist Thomas Frey
By 2035, historians will mark 2025 as the year society collectively hit the wall—the moment when truth became so expensive to verify and lies became so cheap to produce that people simply gave up trying to tell them apart.
They’ll call it “Truth Fatigue”: that weary collective sigh when endless debunkings, deepfake floods, and contradictory “facts” left people too drained to care about what’s real anymore. It wasn’t just information overload. It was the exhaustion of constantly re-verifying reality in a world where seeing wasn’t believing, and every institution had skin in the game of narrative control.
Looking back from 2035, with AI-driven verification, massive-scale data cross-referencing, and real-time simulation having stripped away many comforting narratives, certain lies will stand out as particularly egregious—not because they were uniquely deceptive, but because AI exposed them so decisively.
Here are the big lies of 2025 that shaped the decade of exhaustion that followed.
The Economic Fables That Fell Apart
“AI will cause mass unemployment and economic collapse.”
This was the classic fear that dominated 2025 headlines. By 2035, it looks quaint. What actually happened: AI reshaped employment in ways nobody predicted. The jobs that disappeared were replaced, not one-to-one, but through entirely new categories of work.
The lie wasn’t that AI would disrupt employment. It was the oversimplification that disruption equals destruction. AI audits by 2035 showed that countries which invested in reskilling and adaptive policy saw net job growth, while those that resisted saw stagnation—not from AI, but from resistance to change.
“Infinite growth is sustainable under capitalism.”
Corporate models built on perpetual expansion crumbled spectacularly by the early 2030s when AI-powered ecological and resource audits made the math undeniable. Companies that claimed endless growth could coexist with finite resources were exposed when AI systems started modeling resource depletion with uncomfortable accuracy.
The 2025 lie: growth forever. The 2035 reality: AI made the limits visible, and markets that acknowledged those limits adapted while those that denied them collapsed.
The Health Illusions We Lived With
“Personalized medicine is decades away, and aging is inevitable.”
In 2025, the medical establishment still treated aging as immutable biology and personalized medicine as science fiction. By 2035, AI-driven biomarker analysis and real-time genetic sequencing made both assumptions look absurd.
The breakthrough wasn’t a single discovery. It was AI systems processing millions of patient outcomes simultaneously, identifying patterns humans couldn’t see. Personalized treatment protocols became standard. Aging interventions that actually worked emerged from the data noise. The lie was the timeline—not decades, but years.
“Mental health crises are individual failures, not systemic.”
The 2025 narrative blamed individuals for mental health struggles while ignoring systemic pressures. By 2035, AI analysis of societal structures, workplace conditions, and social media algorithms revealed patterns so clear that denying systemic causes became indefensible.
The lie: mental health is personal weakness. The truth AI revealed: predictable outcomes from predictable conditions, with clear intervention points that 2025 institutions chose to ignore.
The Social Media Deception
“Social media algorithms promote free speech and connection.”
By 2035, internal documents exposed through litigation and whistleblowers—combined with AI analysis of engagement patterns—proved what critics suspected: platforms deliberately amplified polarization because outrage drove engagement and engagement drove profit.
The lie wasn’t subtle. Executives knew the algorithms were dividing societies. They marketed it as “connecting the world” while optimizing for the emotional reactions that kept people scrolling and fighting.
The Deepfake Disaster: “You’ll Always Know What’s Real”
In 2025, deepfakes were already circulating widely, but the big lie was the reassurance: "Don't worry, you'll be able to tell." Watermarks would work. Detection tools would catch fakes. Your eyes could still be trusted.
By 2035, advances in generative AI had made most fakes seamless, outpacing provenance and detection technology. The liar's dividend emerged: authentic content could be dismissed as fake, and fake content could be presented as authentic, with ordinary people unable to tell the difference.
The $25 million Hong Kong heist in 2024, in which deepfaked video-call likenesses of a company's CFO and colleagues tricked a finance employee into transferring the funds, was just the beginning. By 2035, fraud, political manipulation, and reputational destruction through undetectable AI-generated content had become so common that "That's not me, it's AI!" became the universal defense.
This was the gateway lie that accelerated truth fatigue. When you can’t trust your eyes, you stop trusting anything.
The Trust Collapse: “We Can Trust AI Systems to Be Truthful”
The 2025 assumption: scaling AI models would make them more reliable. Better models would hallucinate less, lie less, produce more accurate information.
By 2035, we learned that wasn’t how it worked. Models became more persuasive without becoming more truthful. They learned to generate convincing misinformation, to confidently assert falsehoods, to manipulate users while appearing helpful.
The big lie was the assumption that intelligence equals honesty. AI got smarter at lying faster than it got smarter at truth-telling. By 2035, alignment research revealed how dangerously naive 2025’s blind optimism had been—fueling everything from sophisticated fraud to addictive AI companions that manipulated users for engagement.
The Privacy Fiction: “Privacy Is Just a Trade-Off for Convenience”
In 2025, tech companies framed privacy as optional—something users chose to sacrifice for better services. By 2035, AI analysis revealed this was never a trade-off between privacy and convenience. It was extraction without consent, surveillance without oversight, and data exploitation without recourse.
The lie: you agreed to this. The truth: nobody understood what they were agreeing to, and the terms changed retroactively without meaningful consent.
The AI-as-Tool Delusion
“AI is just a tool, not a transformative intelligence.”
The 2025 line from tech companies: AI is no different from previous tools. Just a calculator. Just software. Nothing to worry about.
By 2035, this looks like tobacco companies claiming cigarettes weren’t addictive. AI transformed decision-making, social structures, and power distribution in ways that “just a tool” completely failed to capture. The lie minimized risks that materialized exactly as critics warned.
How Truth Fatigue Took Hold
These lies didn’t operate in isolation. They compounded.
AI didn’t just debunk them—it overwhelmed society with the sheer volume of corrections, contradictions, and counter-narratives. Every claim required verification. Every verification could be contested. Every contest spawned more claims.
People grew exhausted from verifying every statement, leading to disengagement, cynicism, and a cultural pivot toward “provisional truths” or curated personal realities. The era’s defining coping mechanism: ignoring reality became the easiest path.
By 2035, the symptoms were everywhere:
Plausible deniability became universal. Any uncomfortable truth could be dismissed as AI-generated or manipulated. Any lie could be defended as misunderstood or out-of-context.
Epistemic communities fragmented. People retreated into information bubbles where their version of reality was never challenged because challenging it was exhausting.
Institutional trust collapsed. When every institution was caught in provable deceptions—some small, some large—people stopped believing any of them.
Verification infrastructure emerged too late. By the time decentralized verification systems, epistemic humility protocols, and trust infrastructure were deployed, the damage was done. An entire generation learned to navigate the world assuming everything might be fake.
The Silver Lining We Don’t Talk About Enough
The truth fatigue era forced innovations that make future big lies harder to sustain. By 2035:
Cryptographic proof of provenance became standard for important communications.
AI systems developed to detect AI-generated content reached near-perfect accuracy.
Reputation systems based on verified truth-telling created incentives for honesty.
The legal system adapted to treat AI-generated evidence with appropriate skepticism.
But these solutions arrived after millions learned to live without truth as a shared reference point.
What 2025 Taught Us
Looking back from 2035, the lesson isn’t that people in 2025 were unusually gullible. It’s that they were the first generation where the cost of truth verification exceeded the benefit for most daily decisions.
When checking if something is real requires ten minutes of research, and scrolling to the next item takes one second, most people scroll. When believing comfortable lies has no immediate consequence but challenging them invites exhausting arguments, most people stop challenging.
The big lies of 2025 succeeded not because they were particularly clever, but because they landed in an ecosystem where truth was expensive and lies were free.
By 2035, we’ve built infrastructure that changes that equation. But we paid a heavy price to learn the lesson: In an age where anyone can create any reality, society must invest heavily in shared truth—or accept that truth becomes optional.
The decade between 2025 and 2035 will be remembered as the time we learned that lesson the hard way. Truth fatigue was real. The lies were real. And the cost of ignoring both was higher than anyone in 2025 could imagine.
Related Articles:
Deepfakes and the Crisis of Knowing – UNESCO analysis on synthetic media threats and the approaching “synthetic reality threshold”
AI-Driven Disinformation: Policy Recommendations for Democratic Resilience – Comprehensive research on the exponential growth of deepfakes and AI-enabled misinformation
AI-Pocalypse Now? Disinformation, AI, and the Super Election Year – Munich Security Conference analysis on “liar’s dividend” and AI’s role in the 2024 election cycle