Ranking #1 on Google Used to Predict AI Citations. Now It Barely Does.
Seven months ago, 76% of pages cited in Google AI Overviews also ranked in the top 10 for the same query. By February 2026, that number had fallen to 38%. Some analyses put it as low as 17%.
That's not a marginal drift. That's a structural decoupling of traditional search rankings from AI citation behavior. Understanding why it happened changes how you approach AI visibility.
What changed: query fan-out
Traditional search is a matching problem. A user submits a query. The engine ranks pages by relevance. The top-ranked page wins visibility.
AI search is different. When a user submits a query to Google AI Mode, ChatGPT Browse, or Perplexity, the system doesn't look at the top result and stop there. It decomposes the query into multiple sub-queries -- sometimes five to ten of them -- runs retrievals for each independently, then synthesizes the results into a single answer.
The mechanism is called query fan-out. Google has confirmed it in its own documentation for AI Overviews and AI Mode, and independent research has observed the same behavior in Perplexity's live retrieval architecture and in ChatGPT Browse.
Here's what it looks like in practice. A user asks: "best data management tools for small business." The AI's sub-queries might include:
- "data management software comparison 2026" - "small business data tools features" - "affordable data management solutions pricing" - "data management alternatives to Salesforce" - "what to look for in a data management platform"
A page optimized specifically for "best data management tools for small business" might rank #1 for the main query. But if it doesn't appear across the sub-query results -- if it answers only one angle of the question -- it can miss the citation entirely. Meanwhile, a collection of pages that each cover a different angle will get aggregated into the AI's answer.
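To make the mechanism concrete, here's a minimal sketch of a fan-out pipeline. Everything in it is illustrative -- no AI search engine exposes this as a public API -- but the shape is the point: one query in, many independent retrieval passes, one pooled set of citation candidates.

```python
# Toy fan-out pipeline. All names and data are illustrative; no vendor
# exposes this as a public API. The shape is what matters.

def decompose_query(query: str) -> list[str]:
    """In a real system this is an LLM call; hardcoded for illustration."""
    return [
        "data management software comparison 2026",
        "small business data tools features",
        "affordable data management solutions pricing",
        "data management alternatives to salesforce",
        "what to look for in a data management platform",
    ]

# Toy index: sub-query -> top-ranked URLs for that sub-query.
TOY_INDEX = {
    "data management software comparison 2026": ["g2.com/compare", "site-a.com/comparison"],
    "small business data tools features": ["site-b.com/features"],
    "affordable data management solutions pricing": ["site-a.com/pricing"],
    "data management alternatives to salesforce": ["reddit.com/r/smallbusiness"],
    "what to look for in a data management platform": ["youtube.com/how-to-choose"],
}

def retrieve(sub_query: str) -> list[str]:
    """Stand-in for one independent retrieval pass against a search index."""
    return TOY_INDEX.get(sub_query, [])

def citation_candidates(query: str) -> set[str]:
    sub_queries = decompose_query(query)
    # The candidate pool is the union of every pass -- not the ranking
    # for the original query alone.
    return {url for sq in sub_queries for url in retrieve(sq)}

print(citation_candidates("best data management tools for small business"))
```

Notice that site-a.com contributes two URLs to the pool because it covers two angles (comparison and pricing) -- which is exactly the multiplier the next section quantifies.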
The multiplier effect
Our May 2026 investigation of query fan-out behavior (drawing on an Ahrefs study of 863,000 keywords and 4 million AI Overview URLs) found a clear directional pattern.
Sites appearing in results for multiple fan-out sub-queries are 161% more likely to earn AI Overview citations than sites that rank only for the main keyword.
More counterintuitively: sites that rank only for fan-out sub-queries -- not the main query -- are 49% more likely to be cited than sites ranking only for the primary keyword.
A page that ranks for an adjacent sub-question, not the main keyword, is more likely to appear in the AI's answer than a page that ranks #1 for the question the user actually asked. The AI is assembling its response from the sub-query space, not from the top organic result.
Google's upgrade to Gemini 3 in January 2026 accelerated this. Query decomposition became more sophisticated, which means more sub-queries per user query, which means more openings for non-top-ranked pages to earn citations.
What our audit data shows
In our May 2026 analysis of Perplexity citation sources -- the first time we mapped every URL Perplexity cited in a live audit -- we found 102 source URLs across 12 query records (session 16, 2026-05-13).
When a client scores 0.0/10 on all four of our standard B-series queries (which test category-level visibility -- "best [category] tools," "best [category] software," "compare [category] options," "how do I choose"), the instinct is to conclude they're absent from the category entirely.
They may not be. Our four B-queries test the most common purchase-intent framing. Each of those queries fans out into five to ten sub-queries when the AI processes it. A client with a 0.0/10 B-score might still have partial presence in some of those sub-queries -- we're not measuring the full fan-out space with the standard query set.
This doesn't change what the fix looks like. Building content across multiple angles of your category (comparison, how-to-choose, feature guide, use-case specific) is correct. Getting listed in directories is correct. But it clarifies why these fixes work. Every comparison article, every directory category page, every use-case post is a potential source for some sub-query in the fan-out expansion.
The 102-source Perplexity analysis also showed which specific pages Perplexity actually reads when answering category queries: a small domain called aeoaudittool.com appeared in three of four B-queries at the same frequency as a well-funded enterprise competitor. The G2 category page appeared in three of four queries. These pages are what Perplexity synthesizes from when answering "what are the best [category] tools?" Getting into those pages -- not improving your own site's authority -- is the Perplexity category fix.
The content implication
The single-page, single-keyword model produces one citation candidate. In a fan-out world, that's one shot at appearing in one of many sub-query retrieval passes.
The businesses earning AI citations at scale have content that covers the topic from multiple angles: a comparison page, a how-to-choose guide, an industry-specific use case post, a pricing breakdown. Each page is a separate sub-query citation candidate.
This is not an argument for volume. The question to ask is: what are the distinct sub-questions someone might have before buying my service or choosing my tool? Build a page for each sub-question. Each page can appear in a different sub-query retrieval. Collectively they give the AI something to cite across multiple angles -- which is how the 161% multiplier accumulates.
A business with eight thin service-area pages and zero comparison content is essentially a single-angle citation candidate. A business with a comparison post, a how-to-choose post, and a use-case post for its most common client type is a three-angle candidate. The score difference, in an AI that's running ten sub-queries per question, compounds quickly.
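A toy probability model shows why this compounds. Assume, purely for illustration, that a page covering one angle has a fixed chance of being retrieved for the fan-out sub-query matching that angle, and that a brand gets cited if at least one of its angles surfaces. The probability value below is made up -- only the shape of the curve is the point.

```python
# Toy model of why distinct content angles compound. P_HIT is an
# invented per-angle retrieval probability, not a measured number.

P_HIT = 0.4  # illustrative chance one angle surfaces in its sub-query

def chance_of_citation(angles: int) -> float:
    # Cited if at least one angle's page is retrieved.
    return 1 - (1 - P_HIT) ** angles

for k in (1, 3, 5):
    print(f"{k} angle(s): {chance_of_citation(k):.0%}")
# 1 angle(s): 40%
# 3 angle(s): 78%
# 5 angle(s): 92%
```

Whatever the true per-angle odds are, the structure holds: each genuinely distinct angle multiplies the chances, while eight thin pages covering the same angle add nothing the first page didn't.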
One angle most brands aren't covering
Our May 2026 investigation of AI citation sources surfaced one data point that hasn't fully landed with most businesses: YouTube now accounts for 18.2% of all citations in Google AI Overviews -- up 34% in six months, and currently the single most-cited domain across AI Overviews.
Video content is functioning as a distinct sub-query citation surface. An instructional video -- "how to choose a data management platform for a mid-sized company" -- can appear in the how-to-choose sub-query while a comparison article appears in the comparison sub-query. Two different content formats covering two different angles, each contributing separately.
This matters because 41% of cited YouTube videos had under 1,000 views at time of citation (from the OtterlyAI 100M+ citation dataset, March 2026). Perplexity accounts for 38.7% of total YouTube citations across platforms. A well-described video does not need an audience to earn AI citations -- it needs topic clarity, an accurate description, and relevant metadata.
Most businesses are absent from this surface entirely. A 10-minute instructional video ("how to choose a [your service category] in [your city]") is an underutilized entry point into the sub-query space.
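If you publish that video, the metadata is what the retrieval layer reads. Here is a minimal schema.org VideoObject, built as a Python dict and emitted as JSON-LD -- the property names are standard schema.org; every value is a placeholder for your own video.

```python
import json

# Minimal schema.org VideoObject. Property names are standard schema.org;
# all values below are placeholders.
video_markup = {
    "@context": "https://schema.org",
    "@type": "VideoObject",
    "name": "How to choose a data management platform for a mid-sized company",
    "description": (
        "Walkthrough of five criteria to compare before buying: pricing "
        "model, integrations, migration path, support, and scaling."
    ),
    "uploadDate": "2026-05-01",
    "duration": "PT10M",  # ISO 8601 duration: 10 minutes
    "thumbnailUrl": "https://example.com/video-thumb.jpg",
    "contentUrl": "https://example.com/how-to-choose.mp4",
}

# Embed on the page inside <script type="application/ld+json"> ... </script>
print(json.dumps(video_markup, indent=2))
```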
How to think about this practically
Fan-out doesn't change what you should build. Directories, schema markup, external citations, and content breadth remain the right fixes. But it changes how you evaluate effort.
The question shifts from "how do I rank #1 for my main keyword?" to "which sub-questions in my category do I currently have no citation surface for?" Answer that, and you know where to build next.
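One way to make that actionable: write down the sub-question angles for your category, tag each existing page by the angle it actually answers, and diff the two lists. A sketch -- the angle taxonomy and page inventory here are examples, not a standard:

```python
# Sub-query coverage audit sketch. Angle names and the page inventory
# are illustrative examples.

CATEGORY_ANGLES = {
    "comparison",     # "best X tools", "X software comparison"
    "how-to-choose",  # "what to look for in an X"
    "pricing",        # "affordable X", "X pricing"
    "alternatives",   # "X alternatives to <incumbent>"
    "use-case",       # "X for <industry or company size>"
}

# Pages you already have, tagged by the angle each one answers.
EXISTING_PAGES = {
    "/blog/top-5-tools-compared": "comparison",
    "/guides/how-to-choose": "how-to-choose",
}

gaps = CATEGORY_ANGLES - set(EXISTING_PAGES.values())
print(f"No citation surface for: {sorted(gaps)}")
# -> No citation surface for: ['alternatives', 'pricing', 'use-case']
```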
A client who asks "why doesn't the AI mention us?" often has a keyword ranking they're proud of. Sometimes the problem isn't that they've failed at search -- it's that they've succeeded at one angle while the AI is synthesizing from ten.
The Signal Check gives per-platform scores across brand, category, and service queries. That breakdown shows which query types are failing and on which platforms the gaps sit -- the starting point for mapping the sub-query space you're absent from. If you want to start there before running a full audit, it's at sourcepull.ca.
See how your business scores on AI platforms.
Check your score — free