Tactical · 6 min read · 2026-05-08

Why AI Still Won't Cite Your Business (After Fixing the Basics)

You've added LocalBusiness schema. You've cleaned up your directory listings. You've claimed and built out your Google Business Profile, written specific service pages, and even created an llms.txt file. And you're still not showing up when AI models answer questions about your category.

This is a real pattern we see after businesses complete the foundational work. The first layer of fixes is structural — schema, crawl access, NAP consistency — and those fixes are necessary. But they're not sufficient. There's a second layer of problems that emerges only after the basics are in place.

Here's what's actually stopping you.

You're being outcompeted, not excluded

The most common scenario we diagnose after the basics are done: you're not absent because of a technical gap. You're absent because a competitor is more thoroughly optimized, and AI models only have room for one or two confident recommendations per query.

AI citation doesn't work like a ranked results page. When Perplexity or Claude answers "best accountant for small businesses in Kitchener," it picks one or two names and stops. If your competitor has stronger corroboration across directories, more specific review content, and a more complete GBP, it gets named and you don't — even if your structural setup is correct.

The diagnostic test: search for your business category and city and look at who is being cited. Then look at that competitor's schema, their directory presence, their GBP service list, and their reviews. Wherever they're more specific than you, that's the gap.

This isn't a technical fix — it's a depth problem. The solution is to be more thorough, not different.

Your schema is technically correct but semantically thin

Schema that passes a validator isn't the same as schema that performs. We see technically clean LocalBusiness schema in Signal Check audits where the business is still not being cited — because the schema is present but empty of useful signal.

The most common thin schema problems:

**No `areaServed`.** If your schema doesn't list the cities and regions you serve, AI models only know where you're located, not where you work. For service-area businesses especially, this means you match a single city when you should be matching twenty.

**Generic service names.** `hasOfferCatalog` with entries like "Plumbing" or "Legal Services" doesn't help a model match you to queries like "burst pipe repair in Burlington" or "family law mediation in Hamilton." Specific service names — phrased the way clients would ask — are what create those matches.

**No `aggregateRating`.** If you have Google reviews but haven't implemented AggregateRating schema, AI models can see your reviews through their crawlers but can't cleanly associate the rating data with your business entity. It's a small gap and a fixable one.

Fix the depth of your schema, not just the structure.
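To make the three fixes concrete, here is a sketch of a filled-out JSON-LD block with `areaServed`, specific service names, and `aggregateRating` in place. The business name, type, cities, services, and figures are placeholders, not a template to copy verbatim:

```json
{
  "@context": "https://schema.org",
  "@type": "Plumber",
  "name": "Example Plumbing Co.",
  "address": {
    "@type": "PostalAddress",
    "addressLocality": "Hamilton",
    "addressRegion": "ON"
  },
  "areaServed": [
    { "@type": "City", "name": "Hamilton" },
    { "@type": "City", "name": "Burlington" },
    { "@type": "City", "name": "Stoney Creek" }
  ],
  "hasOfferCatalog": {
    "@type": "OfferCatalog",
    "name": "Plumbing services",
    "itemListElement": [
      {
        "@type": "Offer",
        "itemOffered": { "@type": "Service", "name": "Burst pipe repair" }
      },
      {
        "@type": "Offer",
        "itemOffered": { "@type": "Service", "name": "Tankless water heater installation" }
      }
    ]
  },
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.8",
    "reviewCount": "112"
  }
}
```

Note the service names: "Burst pipe repair," not "Plumbing." That specificity is what lets a model match you to the query as the client phrases it.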

The query-service mismatch problem

This one is underdiagnosed. You're optimized for the services you think your clients search for, but AI queries often phrase things differently than you'd expect.

A kitchen renovation company that built all its pages around "kitchen renovations" may be missing queries like "who does kitchen remodeling in [city]," "custom kitchen cabinets," or "open concept kitchen conversion." The language on your pages determines which queries you match — not the service itself.

The fix: run test queries on Perplexity and ChatGPT and note the exact phrasing that surfaces competitors. Ask your actual clients how they described the problem before they knew your industry's terminology. Add those variations to your service page copy and, where relevant, your FAQ schema.

This isn't keyword stuffing. It's closing the gap between how clients think about the problem and how you describe the solution.
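Where FAQ schema is the right home for those phrasing variations, a minimal FAQPage entry could look like this (the question wording, cities, and answer text are placeholders to adapt, not copy):

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "Do you do kitchen remodeling in Hamilton?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Yes. We handle full kitchen remodels, custom kitchen cabinets, and open concept kitchen conversions across Hamilton and Burlington."
      }
    }
  ]
}
```

Each question is a chance to mirror a real client phrasing you heard in the test queries above.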

Training data lag vs. live retrieval gap

If you've made structural fixes recently — in the last few months — there may be a timing issue explaining why some platforms cite you and others don't.

Perplexity crawls the web live on every query. Changes you made last week can affect Perplexity results within days. If you're showing up on Perplexity but not on ChatGPT or Claude, the structural work is probably correct — those platforms are drawing from older training data that predates your fixes.

ChatGPT and Claude answer business-specific queries from a mix of training data and real-time web access. Training data updates on a slower cycle, so a business that built its schema and directory presence six months ago may not see improvement in ChatGPT retrieval until the next model update.

There's nothing you can do to accelerate a training data cycle. But understanding this lag tells you why you should check each platform individually rather than treating AI visibility as a single metric.

The confidence threshold you haven't crossed yet

AI models apply an implicit confidence filter to local business recommendations. If the aggregated signals for your business are strong enough, you get cited. If they're good but just short of the threshold, you get omitted — even if every individual signal is technically correct.

Getting over that threshold usually comes down to corroboration: how many independent, credible sources describe your business consistently and specifically. Your schema is one source. Your GBP is one source. Your Yelp listing, your HomeStars profile, your BBB entry, your Houzz portfolio — each one is an additional data point that raises the confidence score.

We find in Signal Check audits that businesses stuck below the citation threshold are typically present in two or three directories but missing from six or seven that matter in their category. Filling those gaps is usually the difference between being mentioned and being cited.

Add the directories. Fill them out with specific service descriptions and the cities you serve. Update the ones with stale or generic data. It's methodical work, not glamorous, but it compounds.

How to know which of these is your problem

Run a structured diagnostic across all four major AI platforms: ChatGPT, Perplexity, Claude, and Gemini. Ask the same category-plus-city query on each. Note whether you're cited, mentioned without a recommendation, or absent entirely.

The pattern tells you where to focus. Present on Perplexity but absent on ChatGPT: training data lag or authority gap. Present on Gemini but not others: your Google ecosystem is strong but your broader web presence is thin. Absent everywhere after fixing the basics: you're below the confidence threshold — go deeper on corroboration and query-service alignment.
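If it helps to make those decision rules explicit, here is a small sketch that maps a set of per-platform results to the likely cause. The function name and platform keys are illustrative; there is no real API here, you record each result by hand after running the query:

```python
# Record one of three outcomes per platform for the same
# category-plus-city query, then map the pattern to a likely cause.
CITED, MENTIONED, ABSENT = "cited", "mentioned", "absent"

def diagnose(results: dict) -> str:
    """Map per-platform presence to the most likely gap, per the patterns above."""
    present = {p for p, r in results.items() if r != ABSENT}
    if not present:
        # Absent everywhere after the basics: below the confidence threshold.
        return "below confidence threshold: deepen corroboration and query-service alignment"
    if "perplexity" in present and "chatgpt" not in present:
        # Structural work is likely correct; slower platforms lag behind.
        return "training data lag or authority gap"
    if present == {"gemini"}:
        # Google ecosystem strong, broader web presence thin.
        return "strong Google ecosystem, thin broader web presence"
    return "competitive depth gap: audit the cited competitor's signals"

# Example: visible on Perplexity, mentioned on Gemini, absent elsewhere.
results = {
    "chatgpt": ABSENT,
    "perplexity": CITED,
    "claude": ABSENT,
    "gemini": MENTIONED,
}
print(diagnose(results))  # → training data lag or authority gap
```

The point isn't the code; it's that the diagnosis is deterministic once you have all four data points, which is why running the query on every platform matters.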

If you want a structured version of this diagnosis, a Signal Check at sourcepull.ca runs it across all four platforms automatically, flags where you're showing up and where you're not, and returns a scored breakdown by signal type. For businesses that have done the basics and still have a gap, it usually surfaces the specific layer that's holding you back.

The basics are necessary. They're just not the finish line.

See how your business scores on AI platforms.

Check your score — free