We Built the Most Powerful Truth-Distribution System in History and Then Filled It With Lies
By Futurist Thomas Frey
The Oldest Weapon, Newly Armed
Disinformation is not a product of the digital age. Julius Caesar’s enemies spread rumors about his health to undermine his authority. Medieval monarchs commissioned forged papal documents to legitimize land grabs. World War II intelligence operations ran elaborate deception campaigns that altered the course of military history. Human beings have been manufacturing false reality as a tool of power for as long as they have competed for it.
What is new is the infrastructure. We have built a global nervous system capable of transmitting a message to three billion people in under an hour, with no editorial gatekeeping, no verification requirement, and an algorithmic reward structure that systematically favors content provoking outrage over content conveying accuracy. We did not build this system to spread lies. We built it to connect humanity. The fact that it turbocharges deception with equal efficiency is not a design flaw anyone intended — but it is a design flaw everyone must now reckon with.
And then we handed it artificial intelligence.
The Taxonomy of Lies
Before we can fight the disinformation engine, we need to understand what we are actually fighting. Disinformation is not a monolith. It arrives in distinct forms, each with its own logic, its own source, and its own method of infection.
The first and most familiar is political disinformation — false or misleading narratives deliberately spread to shape electoral outcomes, delegitimize opponents, or manufacture consent for policies. During the 2016 U.S. presidential election, the Internet Research Agency in St. Petersburg operated hundreds of fake American social media accounts, running Black Lives Matter groups and Second Amendment rallies simultaneously — not to advance either cause, but to deepen division and exhaust trust. The goal was never to win an argument. It was to make argument itself feel pointless.
The second form is corporate disinformation — commercial self-interest disguised as neutral information. The tobacco industry’s decades-long campaign to manufacture scientific doubt about smoking’s health effects is the canonical example, but the playbook has been applied to climate science, pharmaceutical side effects, and nutritional research. When the sugar industry funded Harvard research in the 1960s that shifted blame for heart disease from sugar to fat, it shaped American dietary policy for a generation. Nobody stood up at a podium and lied. The lie was in the funding structure, invisible to almost everyone reading the studies.
The third form is ideological disinformation — false information that spreads not because someone is paid to spread it, but because it confirms what a community already wants to believe. The anti-vaccine movement did not originate with a well-funded disinformation campaign. It began with a fraudulent 1998 study by Andrew Wakefield, who had been paid by lawyers preparing litigation against vaccine manufacturers — a conflict of interest he concealed. The study was retracted. His medical license was revoked. The lie outlived both events and is still killing children through preventable diseases.
The fourth form, and the one most urgently relevant right now, is synthetic disinformation — false content that did not originate with a human witness misrepresenting reality, but was manufactured wholesale from nothing.

The Synthetic Turn
In May 2023, a fabricated image of an explosion near the Pentagon went viral on Twitter. The stock market briefly dropped. There was no explosion. There had been no witness, no photographer, no incident. There was only a generative AI model and someone with access to it. The entire event — the image, the virality, the market reaction — took less than forty minutes from first post to debunking. The damage, diffuse and psychological, was harder to measure and longer to reverse.
This is the inflection point we have crossed. For most of human history, disinformation required at minimum a willing human to lie, and the lie was constrained by what that human could plausibly claim to have witnessed or documented. Synthetic media removes that constraint entirely. A convincing deepfake video now takes hours to produce, not months. Audio cloning of a voice requires samples as short as three seconds. Large language models can generate thousands of unique, locally customized disinformation articles — calibrated for specific regional dialects, referencing local politicians, naming real streets — faster than any human fact-checking operation can respond.
In the September 2023 Slovak parliamentary elections, an audio recording purportedly of a liberal candidate discussing how to rig the election circulated two days before the vote — precisely timed to fall within the legally mandated pre-election media blackout, when official channels couldn’t respond. The candidate lost. Fact-checkers later concluded the audio was AI-fabricated. The timing was not an accident. It was a feature.
How AI Will Rewire the Disinformation Wars
What happens next is not simply that disinformation gets more convincing. It gets more personalized, more scalable, and more precisely targeted — and those three characteristics together represent something qualitatively different from anything we have dealt with before.
Personalized disinformation means a false narrative tailored not just to a demographic, but to an individual. Your browsing history, your social connections, your known political sensitivities, your stated fears — all of this data already exists and is already being aggregated. AI systems can use it to craft a disinformation message that reads as if written specifically for you, referencing the things you care about, framing the lie in the language most likely to bypass your skepticism. This is not speculative. The targeting infrastructure exists. The generative AI to exploit it exists. The integration of the two is the near-term threat.
Scalable disinformation means the economics of lying have collapsed. A nation-state disinformation operation in 2010 required hundreds of human operators, significant infrastructure, and months of preparation. An equivalent operation today requires a moderately skilled prompt engineer, a commercial AI subscription, and a weekend. The barrier to entry for industrial-scale deception is now roughly equivalent to the barrier to entry for a Substack newsletter.
And precise disinformation means false narratives can be injected at exactly the right moment in exactly the right information environment to cause maximum disruption — before an election, during a market-moving announcement, at the peak of a public health crisis — with AI models optimizing for timing and distribution the way advertising platforms optimize for click-through rates.

The Hard Question We Keep Avoiding
Here is the question that our institutions have not yet found the courage to answer directly: when detection is harder than creation, when false content can be produced a thousand times faster than it can be verified, and when the platforms distributing it are economically incentivized by engagement rather than accuracy — what does a functioning information ecosystem actually look like?
Provenance technology — cryptographic signatures embedded in authentic media at the point of creation — is the most promising structural answer. The Content Authenticity Initiative, backed by Adobe, the BBC, Microsoft, and others, is building exactly this. The idea is that authentic video, audio, and photographs carry a verifiable chain of custody from creation to consumption, making synthetic content identifiable not by how it looks, but by what it lacks.
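To make the mechanism concrete, here is a minimal sketch of that signing-and-verification loop in Python, using the widely available cryptography library. It is an illustration of the principle, not the C2PA specification itself: the manifest fields, the function names, and the loose key handling are all simplifying assumptions, and a real implementation would chain the device key to a manufacturer certificate and record every subsequent edit.

```python
# A minimal sketch of point-of-capture provenance, in the spirit of the
# approach described above. Illustrative only: the manifest layout and
# key handling are simplified assumptions, not the real C2PA standard.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In a real camera this key would live in tamper-resistant hardware,
# with its public half chained to a trusted manufacturer certificate.
device_key = Ed25519PrivateKey.generate()
public_key = device_key.public_key()


def sign_capture(media: bytes, metadata: dict) -> dict:
    """Bind a hash of the media and its capture metadata to the device key."""
    manifest = {
        "media_sha256": hashlib.sha256(media).hexdigest(),
        "metadata": metadata,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = device_key.sign(payload).hex()
    return manifest


def verify_capture(media: bytes, manifest: dict) -> bool:
    """Check the signature, then check the media still matches its hash."""
    claims = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claims, sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(manifest["signature"]), payload)
    except InvalidSignature:
        return False
    return hashlib.sha256(media).hexdigest() == claims["media_sha256"]


photo = b"...raw sensor bytes..."
manifest = sign_capture(photo, {"device": "camera-001", "utc": "2025-01-01T12:00:00Z"})

print(verify_capture(photo, manifest))              # True: chain of custody intact
print(verify_capture(photo + b"edited", manifest))  # False: content no longer matches
```

The point of the sketch is the asymmetry it creates. A fabricated Pentagon photo can imitate how authentic imagery looks, but it cannot produce a valid signature chain it never had. Synthetic content is flagged by what it lacks, exactly as the provenance approach intends.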
AI-powered detection tools are improving rapidly, though they are locked in a permanent arms race with the generative systems they are trying to flag. Digital literacy education at scale, particularly for older adults who were not raised in environments requiring skeptical media consumption, is under-resourced and critically necessary. And regulatory frameworks that hold platforms accountable for the amplification — not just the creation — of synthetic disinformation are beginning to emerge in the European Union, though enforcement remains nascent.
None of these are sufficient alone. All of them are necessary together. The disinformation engine is not going to be switched off. It is going to be outbuilt, out-educated, and eventually outpaced by an equally sophisticated infrastructure of verification.
The question is whether we build that infrastructure before the next election, the next pandemic, or the next manufactured crisis that turns out to be real.