How Search Becomes Less Important (Part 2)
As AI embeds answers directly in our apps and workflows, the future of information access might not involve searching at all
In a world where AI has suffused our apps and interfaces, what happens to search? With Google's search advertising alone generating $187 billion annually, this question has sparked a new battle for dominance, with Microsoft, OpenAI, Meta, and Perplexity all entering the fray.
If you missed it last week, we explored this question from the perspective that Google will win. After all, they have distribution locked up, and the compute and data advantages in AI accrue to them as an incumbent. Their methodical integration of AI features and deeply entrenched user habits suggest their position is secure.
But what if that perspective is wrong? What if the real issue isn't who wins search, but that the search behavior that's driven Google's ascendance may change? This is the question we'll explore this week.
The Challenge of Seeing What's Missing
When I first entered the workforce in the late 1990s, the office supply closet had bottles of Liquid Paper. That iconic white correction fluid, invented by Bette Nesmith Graham in 1956, was once as essential to office work as staplers or paper clips.
But Liquid Paper was a symptom of a problem. When we reached for that white bottle, we weren't really seeking correction fluid; we were dealing with the limitations of typewriters and printed documents. Word processing didn't win by building better correction fluid. It won by eliminating the need for corrections in the first place.
Today, I see the same pattern emerging with search. Google processes over 100,000 searches every second – that's 8.5 billion daily queries from people actively stopping their work to look something up. But just like Liquid Paper, each of these searches is a symptom of a deeper problem: information isn't where we need it, when we need it.
Think about how spell-check transformed our relationship with dictionaries. We didn't stop needing correct spelling; we just stopped having to search for it. The information became embedded in our workflow. Now imagine that same transformation happening across every domain where we currently rely on search.
These kinds of transformative changes are often invisible until they've already happened. In 1979, when Gillette was buying Liquid Paper for millions, few could imagine a world where correction fluid would become obsolete. The same blindness affects us today when we think about search. Nobel laureate Daniel Kahneman captured this perfectly: "The idea that what you don't see might refute everything you believe just doesn't occur to us."
Like those bottles of correction fluid gathering dust in supply closets, the act of searching may be headed for the same fate. Not because someone built a better search engine, but because the workflows that require active searching are changing as LLMs and smarter applications begin to anticipate our information needs.
The Evolution of Information Access
I first noticed this shift in my own work. A year or two ago, I'd instinctively open a new tab and Google every coding question I had, usually ending up on Stack Overflow. These micro-interruptions were so common I barely noticed them – maybe 20 or 30 searches per day just for coding questions. Now, with AI coding assistants in my IDE, those searches have almost vanished. The answers appear inline as I work, no context-switching required.
This isn't just my experience. The graph above shows Stack Overflow's traffic dropping dramatically after ChatGPT's release, from around 20 million monthly views down toward 10 million. It's a striking visualization of how quickly established patterns can change – not because developers have fewer questions, but because those questions are being answered before we need to search.
Back in 2002, when Andrei Broder at AltaVista published the first peer-reviewed paper on search classification, he identified three main types:
Navigational: Finding specific websites (like "YouTube login")
Informational: Seeking knowledge (like "GDP of United States")
Transactional: Completing actions (like "order pizza online")
Google later refined this into their "Do-Know-Go" framework, adding a crucial category called "Know Simple" – queries that could be answered in less than two sentences with uncontroversial answers.
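To make the taxonomy concrete, here is a toy sketch of how a naive keyword heuristic might bucket queries into Broder's three types. The hint lists are invented for illustration; no real search engine classifies intent this simply.

```python
import re

# Illustrative keyword hints only -- not from any real search system.
NAV_HINTS = {"login", "homepage", "signin", "website"}
TRANS_HINTS = {"buy", "order", "download", "book", "subscribe"}

def classify_query(query: str) -> str:
    """Naively bucket a query into Broder's three intent types."""
    words = set(re.findall(r"[a-z]+", query.lower()))
    if words & NAV_HINTS:
        return "navigational"
    if words & TRANS_HINTS:
        return "transactional"
    # Informational is the majority case, so it serves as the default.
    return "informational"

print(classify_query("youtube login"))         # navigational
print(classify_query("order pizza online"))    # transactional
print(classify_query("GDP of United States"))  # informational
```

Real systems infer intent from behavior and context rather than keywords, which is exactly why those signals can also be used to answer a query before it is ever typed.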
Studies suggest that 50-80% of all searches are informational. These "Know" queries are the most vulnerable to being absorbed into workflows. When your calendar automatically shows the weather forecast for your meeting location, or your smart home app anticipates common maintenance issues, those searches never happen.
This shift is already reshaping the information landscape. While Google still processes an astounding 3.1 trillion searches annually, we're seeing a rapid rise in alternative ways of accessing information. These new tools aren't just providing better search results – they're eliminating the need for certain types of searches entirely. And as we'll see, this pattern is repeating across every major category of search intent.
From Search to Seamless Assistance
As these changes accumulate across different types of searches, they point to a more fundamental shift in our relationship with computers. To understand where this is heading, consider two contrasting models of human-computer interaction.
Traditional computing has been like operating a machine in a factory – we stand across from our devices, issuing explicit commands, stopping our work whenever we need to search for information or tools.
But a new model is emerging, one that's more like a surgical team in an operating room. Just as a scrub nurse anticipates the surgeon's needs, handing instruments before they're requested, AI is beginning to anticipate our information needs, providing answers before we search.
If this looks familiar, it should. In the 1990s, Microsoft tried something similar with Clippy, that eager paper clip assistant who would pop up (just as he does above): "I see you're writing a letter. Would you like help?" It was the right idea at the wrong time. Like a surgical resident who keeps handing the wrong instruments, Clippy couldn't truly understand context. As Microsoft employee Chris Pratley noted, Clippy was "optimized for first use" – great for newcomers, frustrating for everyone else.
Today's AI makes this assistant model finally viable. The same pattern is playing out across different fields. Take design work, for example. Just a year ago, designers would spend hours searching through asset libraries and stock photos, constantly context-switching between their design tool and various search interfaces.
Now tools like Figma are embedding AI directly into the design environment. Instead of leaving their workspace to search for assets or reference materials, designers get suggestions right where they work. The search isn't just better – it's disappearing into the workflow itself.
This pattern repeats in other domains. Customer support teams are experiencing a similar shift: instead of agents searching through knowledge bases, AI surfaces relevant information during conversations.
The New Information Access Model
We've seen how AI is eliminating the need for traditional search by embedding information directly into our workflows. Now let's examine how this transformation affects each of the major search categories that have dominated the internet era:
Navigational Search
Remember when finding a website meant opening a browser and typing the URL? Today, operating systems have absorbed this navigational function. When I type "youtube login" into Spotlight on my Mac, it instantly provides the direct link to YouTube's sign-in page – no need to open a browser, search Google, and click through results.
This absorption of navigational search into the operating system is accelerating. Safari on iOS now bypasses Google for many common destinations, providing direct links through the OS. Microsoft's Windows Copilot takes this further, understanding complex commands like "open my last PowerPoint presentation" – tasks that once required multiple explicit searches through folders or browser history.
Transactional Search
Think about the last time you searched for an Airbnb. You probably opened dozens of listings in separate tabs, trying to figure out which places were actually within walking distance of the city center, or had that perfect outdoor space for morning coffee. Each listing shows the same standardized set of attributes – number of bedrooms, bathrooms, amenities – leaving you to do the mental work of comparing what really matters to you. These aren't just searches – they're interface challenges that force us to adapt to each platform's way of organizing information.
This is where AI can change transactional search. Instead of forcing users to learn different interfaces, AI can create personalized, just-in-time interfaces based on natural conversation and what it already knows about us.
We're seeing early glimpses of this future in tools like Claude's "Artifacts" feature, which creates a split-screen view where the AI can generate dynamic interface elements during conversation. While today it's primarily used for data visualization and simple programs, imagine how this could transform shopping: "I need a temperature-controlled kettle that works with my iPhone" could instantly generate a custom comparison table with exactly the features you care about, filtered for your budget and ecosystem preferences.
OpenAI's Canvas feature points in a similar direction – instead of learning different interfaces for different tasks, users can have a conversation with AI that generates exactly the interface elements needed for that specific moment. The traditional pattern of search-then-navigate is collapsing into a single, personalized interaction where the interface adapts to you, rather than you adapting to it.
Informational Queries
The largest category, and the one most ripe for disruption, is informational queries. When we search for answers, we don't want ten blue links and the job of sifting through them ourselves; we want the answer. Google has been working on this challenge for over a decade, starting with its Knowledge Graph in 2012, which marked the first shift from "strings to things" – moving beyond simple keyword matching to understanding entities and their relationships.
The real breakthrough came in 2013 with the Hummingbird update, which introduced semantic search capabilities to better understand the meaning behind queries rather than just matching keywords. This was followed by RankBrain in 2015, which used machine learning to better understand search intent.
But the most significant leap forward came in 2019 with BERT (Bidirectional Encoder Representations from Transformers). Transformers, the AI architecture behind BERT, revolutionized language understanding by analyzing words in relation to all other words in a sentence, rather than processing them in order. This allowed Google to grasp the nuanced context of search queries in ways that weren't possible before.
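The "every word attends to every other word" idea can be sketched in a few lines of NumPy. This is a toy single-head self-attention step for illustration only – real BERT adds learned query/key/value projections, multiple heads, positional encodings, and many stacked layers.

```python
import numpy as np

def self_attention(embeddings):
    """Toy self-attention: each output token is a weighted mix of ALL
    tokens, so context flows in both directions at once -- unlike a
    left-to-right model, later words can inform earlier ones."""
    d = embeddings.shape[1]
    # Score every token pair: how relevant is token j to token i?
    scores = embeddings @ embeddings.T / np.sqrt(d)
    # Softmax turns each row of scores into weights that sum to 1.
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    # Blend all token embeddings according to those weights.
    return weights @ embeddings

# Three toy 4-dimensional "word" vectors (values are arbitrary).
tokens = np.array([
    [1.0, 0.0, 0.0, 0.0],   # "bank"
    [0.0, 1.0, 0.0, 0.0],   # "river"
    [0.9, 0.1, 0.0, 0.0],   # "deposit" -- similar to "bank"
])
out = self_attention(tokens)
print(out.shape)  # (3, 4): one contextualized vector per token
```

Because every output row is a blend of the whole sentence, the representation of an ambiguous word like "bank" shifts depending on whether "river" or "deposit" sits nearby – the nuance Google needed to interpret queries rather than match keywords.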
Initially, Google focused on "Know Simple" queries – straightforward questions with unambiguous answers, like historical dates or basic facts.
But now, with Large Language Models, Google can tackle much more complex informational queries. Its new AI Overviews feature doesn't just provide basic facts – it can synthesize step-by-step instructions, explain complex concepts, and even troubleshoot technical problems that previously would have required reading through multiple help documents or forum posts.
This trend toward comprehensive in-SERP answers is only accelerating. According to SparkToro's latest research, nearly 60% of Google searches are now "zero-click" searches – meaning users find their answer directly in the search results without ever clicking through to an external website. With the introduction of AI Overviews, even more complex queries can be answered directly in the search results, further reducing the need for users to visit individual websites to find the information they need.
The Future of Information Access
As we've seen throughout this article, the transformation of search isn't just about building better search engines – it's about changing how we access information. Just as spell-check transformed our relationship with dictionaries, AI is beginning to transform our relationship with search.
The signs are already visible. Notion embeds AI research capabilities directly into writing workflows. GitHub Copilot answers coding questions without leaving the IDE. These aren't just convenient features – they're early indicators of a profound shift away from the "stop-and-search" pattern that has dominated knowledge work for the past decade.
But this shift raises questions about the future of information access:
What happens to the web's content ecosystem when traffic patterns change?
How will content creators survive when AI answers reduce website visits and ad revenue?
Which types of searches will we still do ourselves, and which will become automatic?
Most importantly, who gets to control how we access information when it's built directly into our apps and tools?
These aren't just abstract problems. They're questions that could reshape how we work, learn, and find information online. Next week, we'll explore these challenges and examine how they might transform not just search, but the web ecosystem that has grown around it.
Want to understand how this transformation might affect your industry? Subscribe to our newsletter for weekly analysis of how AI is reshaping information access and knowledge work. Next week's deep dive into the economics and ecosystem effects of ambient search will be particularly relevant for anyone building or investing in information-centric products.