By Futurist Thomas Frey
Search is dying. Not dramatically—gradually, messily, in ways that make finding information simultaneously easier and harder than it was five years ago.
Traditional search doesn’t work like it used to. Google’s AI Overviews sit atop the results page with AI-generated summaries that have repeatedly been caught exhibiting bias, hallucinating facts, and misquoting sources. The blue links you click are increasingly buried beneath AI-generated answers that may or may not be accurate. Google’s AI Overviews have reduced organic click-through rates by an estimated 20-40%, meaning the websites that used to get traffic from search are starving while Google feeds you AI-summarized information without sending you anywhere.
AI search works better for some things—ask ChatGPT or Perplexity a question and you get direct answers instead of links. But it’s early, unreliable, and breaking the internet’s economic model in ways we haven’t figured out how to fix.
By 2040, search as we know it will be unrecognizable. But the transition between now and then will be chaotic and economically destructive, and it will fundamentally change how information flows online.
The Current Mess
Right now, we’re in an awkward transition where multiple incompatible search paradigms compete:
Traditional search is degrading. As of 2025, approximately 175 trillion gigabytes of data (roughly 175 zettabytes) is available online, and web experts estimate that at least 90% of it is environmentally damaging digital trash. AI-generated content spam floods search results. SEO manipulation is more sophisticated than ever. Finding trustworthy information through traditional Google search feels harder than it did in 2020, even though the algorithms are supposedly better.
AI search is growing but flawed. ChatGPT’s share of general searches tripled from 4.1% to 12.5% between February and August 2025. People like getting direct answers instead of sorting through links. But AI hallucinates facts, can’t reliably cite sources, and collapses nuanced topics into oversimplified summaries. It’s convenient when it works and dangerously misleading when it doesn’t.
Users are fragmenting across platforms. Google’s share of general information searches dropped from 73% to 66.9% in six months. People now use different platforms for different searches—Google for one thing, ChatGPT for another, TikTok for something else, Reddit for product recommendations. There’s no longer one place to search; there’s a toolkit you assemble based on what you need.
The economic model is collapsing. AI search has broken the contract between search engines and content creators—scraping content for free while sending minimal referral traffic. Websites that used to get millions of visitors from Google search now get a fraction because AI answers questions directly without sending users to sources. Publishers are losing revenue while AI companies profit from their content.
Why Traditional Search Can’t Recover
The problems with traditional search aren’t fixable through better algorithms:
AI spam is winning. Websites generate thousands of AI-written articles optimized for search engines. The content is good enough to rank but not good enough to be useful. Traditional search can’t distinguish between human-created quality and AI-generated optimization garbage at scale.
Users expect answers, not links. Once you’ve experienced asking ChatGPT a question and getting a direct answer, going back to clicking through ten blue links feels inefficient. User expectations have shifted permanently—they want information, not websites.
The incentives are broken. Websites create content to get search traffic to sell ads or products. AI search provides information without sending traffic. So why create quality content if AI will just summarize it and keep users on AI platforms? The entire ecosystem that made search valuable is disintegrating.
Why AI Search Isn’t Ready
AI search has fundamental problems that won’t be solved quickly:
Accuracy isn’t reliable. AI hallucinates confidently. It presents wrong information with the same certainty as correct information. Users can’t easily verify without doing the research AI was supposed to eliminate. This makes AI search dangerous for anything important—medical questions, legal advice, financial decisions.
Attribution is terrible. AI search engines provide answers with citations instead of a list of links, but referral traffic has predictably plummeted. The citations are often incomplete, misleading, or simply wrong. You can’t easily check sources or dive deeper into topics.
Bias is embedded. AI models reflect the biases in their training data, the choices made by developers, and the incentives of companies deploying them. There’s no neutral AI search—all of it reflects particular perspectives and commercial interests that aren’t transparent to users.
Commercial corruption is inevitable. As AI search grows, companies will pay to influence what AI says about them. “Answer Engine Optimization” is already emerging—techniques to manipulate what AI systems recommend. The same commercial pressures that corrupted traditional search will corrupt AI search, just through different mechanisms.
The 2040 Search Landscape
By 2040, search won’t be one thing—it’ll be fragmented across multiple incompatible systems:
AI agents as primary interface. Most people won’t “search” at all—they’ll ask AI agents that know their preferences, history, and context. Your AI assistant will proactively find information you need before you ask. Search becomes invisible, automated, and personalized.
Specialized search for professionals. Doctors, lawyers, researchers, and other professionals will use specialized AI search systems trained on domain-specific data and held to higher accuracy standards. These won’t be free consumer products—they’ll be subscription services that professionals depend on for reliable information.
Social search dominates consumer decisions. For products, restaurants, services—anything subjective—people will rely on social platforms where real humans share experiences. TikTok, Reddit successors, and community platforms will be where people search for anything involving taste, quality, or trustworthiness.
Traditional search survives in niches. Google-style search will still exist for finding specific websites, navigating to known resources, and research that requires multiple sources. But it’ll be a minority use case rather than the default.
Walled gardens fragment knowledge. Information will be siloed in platforms that don’t share with each other. Your AI agent trained on data from Company A won’t have access to Company B’s knowledge. The open web of 2020 will be replaced by competitive knowledge silos operating as commercial assets.
Verification becomes premium service. Because AI-generated information is unreliable, services that verify accuracy and provide trusted sources will become valuable. Fact-checking won’t be free; it’ll be a subscription service that people pay for when accuracy matters.
The economic model remains broken. We still won’t have solved how to compensate content creators when AI summarizes their work without sending traffic. Some publishers will charge AI companies for training data. Others will block AI entirely. Much quality content will disappear behind paywalls that AI can’t access, making AI search less useful while fragmenting information further.
What Gets Lost
The transition from traditional to AI search destroys things we haven’t figured out how to replace:
Serendipity. Traditional search showed you unexpected results that led to discovery. AI search gives you what you asked for but nothing you didn’t know to ask about. Exploration becomes harder as AI optimizes for efficiency.
Source evaluation. When you saw search results, you chose which sources to trust based on reputation. AI makes that choice for you, hiding the judgment process. Users lose the ability to evaluate credibility for themselves.
Diverse perspectives. AI systems collapse multiple viewpoints into single answers, eliminating the diversity of thought that traditional search results revealed. Nuance and disagreement get flattened into algorithmic certainty.
Digital literacy. A generation growing up with AI search won’t learn how to research, evaluate sources, or construct knowledge from multiple inputs. They’ll know how to ask AI questions but not how to think critically about answers.
Final Thoughts
Search in 2040 won’t be better or worse than 2020—it’ll be completely different. Traditional search is dying because AI provides more direct answers. AI search is growing despite being unreliable because convenience beats accuracy for most questions.
The transition is breaking the economic model that made the open web valuable, fragmenting knowledge across incompatible platforms, and destroying the skills people developed for evaluating information quality.
We’re moving from a world where you searched to a world where AI agents find information for you automatically. That’s more efficient but less transparent, more personalized but more manipulable, more convenient but less trustworthy.
By 2040, most people won’t remember what search used to be—typing queries, clicking links, evaluating sources. They’ll just ask AI and trust whatever it tells them.
And we still won’t have solved the fundamental problem: how do we compensate the people creating knowledge that AI systems summarize and redistribute without attribution? Until we solve that, quality information will keep disappearing while AI search gets less useful even as it becomes more popular.
Search isn’t dying from better technology. It’s dying from economic and social forces that technology accelerated but can’t solve.
Related Stories:
https://www.ibm.com/think/news/ai-new-search-experience
https://onelittleweb.com/data-studies/ai-chatbots-vs-search-engines/

