AI Recognizes Your Brand. That's Not the Same as Recommending You.
Most AI visibility advice assumes you have one goal: get AI to know your business exists and describe it accurately. Schema markup, Wikidata entries, entity disambiguation -- all of this addresses one question: does AI recognize my brand?
That question matters. But there is a second question that drives actual customer referrals, and it requires completely different answers: does AI recommend my business when someone is shopping for what I sell?
These are not the same problem. We track them as separate dimensions in every audit we run. The gap between them is wider than most business owners expect.
Two types of queries, two different AI problems
When we audit a business, we run queries across all four AI platforms in distinct categories designed to separate recognition from recommendation.
Brand queries are direct: "What is [business name]?" "What does [business name] do?" "Tell me about [business name]." A business with any real web presence scores well here. AI knows the name, has training data or live retrieval context, and can produce a description.
Category queries are different in shape and intent: "Best [your service] in [your city]." "Top-rated [your category] for [use case]." "What's the best [product type] for [customer situation]." These are the queries a potential customer runs before they know which business to contact. Your brand name does not appear in the question. AI is being asked to recommend from an undifferentiated field.
A high score on brand queries means AI knows who you are. A high score on category queries means AI recommends you to people who have never heard of you. These require different signals, different infrastructure, and different fixes.
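The shape difference between the two query sets can be made concrete with a small sketch. The templates below are illustrative, not our actual audit question list; the invariant is what matters: brand queries embed the name, category queries never do.

```python
# Hypothetical sketch of the brand-vs-category query split.
# Templates are illustrative, not the actual audit question set.

def brand_queries(business: str) -> list[str]:
    # Brand queries embed the business name directly.
    return [
        f"What is {business}?",
        f"What does {business} do?",
        f"Tell me about {business}.",
    ]

def category_queries(category: str, city: str) -> list[str]:
    # Category queries never mention the brand; AI must volunteer
    # a recommendation from an undifferentiated field.
    return [
        f"Best {category} in {city}",
        f"Top-rated {category} in {city}",
        f"Who should I hire for {category} in {city}?",
    ]

if __name__ == "__main__":
    name = "Postiz"
    for q in brand_queries(name):
        assert name in q       # recognition: the name is in the question
    for q in category_queries("social media scheduling tool", "Burlington"):
        assert name not in q   # recommendation: the name is absent
```

The invariant is the whole point: on category queries, nothing in the prompt helps the model find you.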
What the Postiz audit showed us
In our May 2026 product analysis (methodology rec, 2026-05-11), we examined this split across a set of recent audits and documented one case in detail.
A SaaS company -- Postiz, audit SP-0501-0002 -- came in with Brand Authority 6.7/10. AI models knew what the product was. They could describe it accurately, categorize it correctly, produce the right kind of answer to direct brand queries. By the standard definition of "AI visibility," Postiz had meaningful presence.
Category Authority: 0.0/10. Across all four platforms, across all category-level queries we ran, Postiz did not appear once.
The rec states the implication plainly: "your business does not appear in ANY commercial intent query." The score summary correctly identified Category Authority as the highest-leverage gap, because category queries are where purchasing decisions originate. Someone asking "what's the best social media scheduling tool" is ready to spend money. Someone asking "what is Postiz" already knows the product name -- they're likely already a customer, or comparing after discovering the product through another channel.
The score math compounds this. In our v3.1 scoring formula, Category Authority carries 20% of the aggregate weight. A business with Brand 7.0 and Category 0.0 still scores 5.6 overall -- which maps to "above average" in our rubric. The headline looks fine. The commercial reality: this business is absent from every AI-driven discovery query in its market.
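The arithmetic behind that 5.6 can be reproduced under one simplifying assumption: Category Authority carries 20% of the weight, and every other dimension is collapsed into the 7.0 brand-side score at the remaining 80%. This is a sketch of the weighting effect, not the actual v3.1 formula:

```python
# Sketch of the weighted-aggregate effect, NOT the actual v3.1 formula.
# Assumption: Category Authority is 20% of the weight and all other
# dimensions are collapsed into a single brand-side score at 80%.

CATEGORY_WEIGHT = 0.20

def aggregate(brand_side: float, category: float) -> float:
    return (1 - CATEGORY_WEIGHT) * brand_side + CATEGORY_WEIGHT * category

score = aggregate(brand_side=7.0, category=0.0)
print(round(score, 1))  # 5.6 -- "above average" despite zero category visibility
```

A 20% weight means a total category wipeout only drags the headline number down by two points at most, which is exactly how a commercially invisible business ends up looking "fine" on paper.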
Why the Perplexity dependency chain matters
Our April 2026 citation research (knowledge file, perplexity-citation-triggers.md, 2026-04-22) documented a pattern that explains why category scores are so often zero: the fix stack that addresses brand recognition has almost no effect on category performance.
For Perplexity, which uses live web retrieval on every query, the dependency chain for category citations is explicit. Category queries ("best web designer in Burlington") pull from whatever Perplexity retrieves when it searches for those terms live. That retrieval returns the same pages that rank in Google organic search for those queries. The chain: Google organic ranking for category keywords -> Perplexity category citation.
Schema on your own site does not change this. Neither does an llms.txt file. An llms.txt file offers crawlers a curated summary of your own site's content; it says nothing about where you rank in the competitive set Perplexity retrieves when someone asks for your category.
Our April 2026 notes document this explicitly: "Every client in our audit set scored 0.0/10 on Category Authority (B-series) across all platforms... A business with no Google ranking for 'web designer Burlington' will not appear in Perplexity's B-series either." That was across all clients in the initial audit set. Not some of them. All of them.
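The dependency chain reduces to a simple predicate: if your domain is not in the organic results retrieved for a category term, it never enters the candidate set Perplexity can cite from. A hedged sketch -- the ranked result list and retrieval depth are made up for illustration, not measured values:

```python
# Sketch of the Google-organic -> Perplexity-category-citation dependency.
# The result list is made up for illustration; a real check would pull
# live organic rankings for the category keyword. The retrieval depth
# of 10 is an assumption, not a measured Perplexity parameter.

def citable_in_category(domain: str, organic_results: list[str],
                        retrieval_depth: int = 10) -> bool:
    # Live retrieval roughly mirrors the top organic results, so a
    # domain outside that window is absent from the candidate set.
    return domain in organic_results[:retrieval_depth]

organic = ["competitor-a.com", "yelp.com", "competitor-b.com"]  # illustrative
print(citable_in_category("yourstudio.com", organic))    # False: no ranking, no citation
print(citable_in_category("competitor-a.com", organic))  # True
```

No amount of on-site markup changes the output of that function; only the ranking input does.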
For ChatGPT, the category dependency runs through a different channel: approximately 49% of ChatGPT's business citations come from third-party directories -- Yelp, BBB, G2, Capterra, industry registries. A business absent from those directories is absent from ChatGPT category responses regardless of how well-structured its own website is. Optimizing your own website primarily helps with Gemini, which pulls 52% of its citations from brand-owned sources. ChatGPT doesn't care. It pulls from the structured listings ecosystem.
This is why the standard AEO fix list -- Organization schema, sameAs links, entity disambiguation -- primarily addresses brand recognition. It is the right answer to the wrong question if category visibility is the actual goal.
When brand recognition itself fails
There is a more severe version of this problem: brand queries failing before category queries become relevant at all.
In our May 2026 audit of Jupitrr, a video AI platform (edge case report, 2026-05-02), Perplexity's query pipeline intercepted the brand name before entity lookup ran. Perplexity's spell-correction layer treated "Jupitrr" as a phonetic variant of "Jupiter" -- the planet -- a high-confidence, high-corpus-presence entity. Perplexity A1 and A2 (direct brand awareness queries) returned information about the solar system.
The brand query never resolved to the business. Adding schema or a Wikidata entry does not fix this. The fix is building enough external indexed presence -- consistent use of "Jupitrr" as a proper noun across Product Hunt, G2, Crunchbase, press coverage -- that the platform's resolver accumulates sufficient signal to override the spell-correction. Until that threshold is crossed, Perplexity users searching for the brand are reading about planetary gas giants.
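The intercept is plausible from edit distance alone: "Jupitrr" is a single character substitution away from "Jupiter", well inside the range a spell-correction layer treats as a typo. A minimal Levenshtein sketch -- the threshold is an assumption, since Perplexity's actual correction logic is not public:

```python
# Minimal Levenshtein distance, illustrating why a spell-correction
# layer would read "Jupitrr" as a typo of "Jupiter". The threshold
# below is an assumption; the real correction logic is not public.

def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

d = levenshtein("jupitrr", "jupiter")
print(d)  # 1 -- one substitution away from a high-corpus-presence entity
assert d <= 2  # a typical "did you mean" threshold would fire here
```

One character of distance against a near-zero citation footprint is a losing fight: the resolver has millions of documents saying "Jupiter" and almost none saying "Jupitrr".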
This is the floor problem. You cannot have category authority if brand recognition fails. And brand recognition can fail for non-obvious reasons: not because you have thin schema, but because your brand name resembles a well-known entity and your external citation footprint is too small to outcompete it.
The Jupitrr case was not a unique misfortune. Any brand name with unusual spelling, phonetic similarity to a common noun, or near-zero external indexed presence is at risk of the same intercept.
The fix sequence for category authority
The path from zero category authority to genuine AI recommendation visibility runs in a specific order.
Confirm brand queries work first. Run direct brand queries on Perplexity and ChatGPT. If you are getting accurate, specific descriptions of your business -- right category, right city, right service description -- brand recognition is established. If you are getting confabulation, entity confusion, or generic "no results" responses, that is the starting problem.
Build the external citation layer. For local service businesses: Google Business Profile (which feeds directly into AI local recommendation behavior), Yelp, and category-specific directories (Houzz for home services, Avvo for legal, Healthgrades for healthcare). For SaaS: G2, Capterra, Product Hunt, and Crunchbase. The goal is consistent use of your brand name as a proper noun across high-authority external domains -- not just directory listings for their own sake, but the specific platforms AI retrieval architecture pulls from.
Check Google organic visibility for category terms. If Google does not rank your site in organic results for your primary category terms and city, Perplexity will not cite you in category queries. This is not an AI-specific fix. It is acknowledging that AI recommendation visibility is partially downstream of search visibility, and that improving one feeds the other.
The entity disambiguation work -- schema, sameAs links, Wikidata -- belongs in the sequence, but at the brand recognition layer, not the category authority layer. Fix it so AI knows who you are. Then build the external infrastructure that makes AI recommend you.
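The ordering above can be sketched as a simple triage, assuming two boolean audit inputs. Both flag names are hypothetical, not real Sourcepull fields:

```python
# Triage sketch of the fix sequence above. The two inputs are
# hypothetical audit flags, not real Sourcepull report fields.

def next_fix(brand_queries_resolve: bool, ranks_for_category_terms: bool) -> str:
    if not brand_queries_resolve:
        # Floor problem: schema, sameAs, and external proper-noun
        # citations so the name resolves to the business at all.
        return "fix brand recognition (entity layer)"
    if not ranks_for_category_terms:
        # Category authority is downstream of organic visibility
        # and the directory ecosystem.
        return "build external citations + category-term organic ranking"
    return "maintain: monitor both dimensions"

print(next_fix(False, False))  # brand recognition first, always
print(next_fix(True, False))   # then the category layer
```

The key property of the sequence is that the branches never reorder: category work on top of a broken brand layer is wasted effort.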
Where to check both dimensions
The brand/category split is not visible in a single AI platform spot-check. Asking ChatGPT your brand name tells you one thing. Running ten category queries across four platforms tells you something entirely different.
Sourcepull's Signal Check scores both dimensions separately across ChatGPT, Perplexity, Gemini, and Claude -- so you can see exactly which problem you're dealing with before deciding what to fix. It runs in about 60 seconds and doesn't require an account.
See how your business scores on AI platforms.
Check your score — free