Tactical · 8 min read · 2026-04-24

How to Audit Your Own AI Search Visibility

Checking your AI search visibility isn't like checking your Google rankings. There's no dashboard, no position tracker, no weekly report. AI models don't publish a list of who they're recommending. You have to go look.

The good news: a useful self-audit takes less than an hour and will tell you exactly where you stand. Here's how to do it.

Start with test queries across multiple models

Open ChatGPT, Perplexity, Claude, and Gemini. You're going to run the same queries across all four, because each model has different retrieval behavior and citation tendencies. A business can be invisible to ChatGPT and well-cited by Perplexity, or vice versa.

For each model, run three query types:

**Category + city queries.** "Best [your service] in [your city]." "Top-rated [your service] near [your city]." This is the format real customers most commonly use when searching via AI. Write down whether you appear, whether a competitor appears, and how confidently the model answers.

**Specific need queries.** "Who does [specific service] in [your city]." For a plumber: "Who fixes burst pipes in [city]." For a lawyer: "Find a family law attorney in [city] who handles custody disputes." These narrower queries often surface businesses that the broad queries miss.

**Entity recognition check.** Type your business name directly. Does the model know what you do, where you are, and what makes you different? Or does it confuse you with another business, give no information, or say it doesn't have data on you?

Document every result. The goal isn't a score — it's a pattern. Are you consistently absent? Occasionally mentioned but not recommended? Named with confident details, or named with vague or wrong information?
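A lightweight way to record results as you go is one entry per model and query type, then a tally of outcomes. The model names and outcome labels below are just the ones used in this section; swap in whatever you actually observe.

```python
# Log one entry per (model, query type) as you test, then tally
# the outcomes to surface the pattern rather than a score.
from collections import Counter

results = [
    {"model": "ChatGPT",    "query": "category+city", "outcome": "absent"},
    {"model": "Perplexity", "query": "category+city", "outcome": "cited"},
    {"model": "Claude",     "query": "category+city", "outcome": "absent"},
    {"model": "ChatGPT",    "query": "entity",        "outcome": "vague"},
]

print(Counter(r["outcome"] for r in results))
```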

Check whether AI crawlers can access your site

Before any model can recommend you, it needs to be able to read your site. Open your robots.txt file (yourdomain.com/robots.txt) and look for any rules that block the following bots: GPTBot, PerplexityBot, ClaudeBot, Googlebot-Extended, CCBot, and OAI-SearchBot.

If you see `Disallow: /` under any of these user-agents, that crawler is blocked. GPTBot is the one most often blocked by accident: website templates, privacy plugins, and security configurations added it to blocklists when it launched. Check every crawler individually rather than scanning for a single name.
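The check itself can be scripted with Python's standard library. A sketch: it parses a robots.txt body and reports which of the crawlers named above would be refused.

```python
# Report which AI crawlers a robots.txt body blocks, using the
# stdlib robots.txt parser. The bot list matches the crawlers above.
from urllib.robotparser import RobotFileParser

AI_BOTS = ["GPTBot", "PerplexityBot", "ClaudeBot",
           "Googlebot-Extended", "CCBot", "OAI-SearchBot"]

def blocked_bots(robots_txt: str, path: str = "/") -> list[str]:
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return [bot for bot in AI_BOTS if not parser.can_fetch(bot, path)]

# Example: a robots.txt that blocks GPTBot but allows everything else.
sample = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""
print(blocked_bots(sample))  # ['GPTBot']
```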

If you run a WordPress site, check your SEO plugin settings too. Some plugins expose per-bot crawl controls in their AI/robots section, separate from the robots.txt file. Both places can block visibility.

Look for your llms.txt file

Check whether your domain has an llms.txt file (yourdomain.com/llms.txt). This is a plain text file that gives AI models structured context about your business — what you do, who you serve, your key service pages, your location.

If the URL returns a 404, you don't have one. Most businesses don't. This is one of the easiest signals to add. A basic llms.txt gives models a clear, authoritative description of your business in plain language, and it requires no technical expertise to create.
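If you decide to add one, here is roughly what a minimal llms.txt looks like, following the common markdown-flavored convention (an H1 name, a short blockquote summary, linked sections). The business, URLs, and towns below are invented for illustration.

```markdown
# Acme Plumbing
> Residential plumbing and drain service in Burlington, VT and
> surrounding towns.

## Services
- [Burst pipe repair](https://example.com/services/pipe-repair)
- [Drain cleaning](https://example.com/services/drain-cleaning)

## Service area
Burlington, Winooski, South Burlington, Essex Junction
```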

If you do have one, read it. Does it accurately describe what you do now? Is the service list current? Does it reference the cities you serve? Many llms.txt files are set up once and never updated.

Audit your entity consistency

Entity consistency determines how reliably AI models can construct a picture of your business from multiple independent sources. Pull up these profiles and compare them carefully:

Your Google Business Profile name, address, phone, and service categories. Your website's contact page name, address, and phone. Your Yelp listing. Your LinkedIn company page if you have one. Your top two or three industry-specific directories.

For each field, check: does it match? Not approximately — exactly. "Acme Plumbing" vs "Acme Plumbing Co." vs "Acme Plumbing & Drains" are three different entities in a model's view. Inconsistent names fragment your signal.

Also check what services are listed. If your website lists 12 services but your GBP lists three, AI models may not know the other nine exist. The more sources that confirm a specific service in a specific city, the higher your confidence signal for that offering.
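The exact-match comparison is easy to mechanize once you've copied the fields out of each profile. A sketch, with invented profile data, that flags any field whose value differs anywhere:

```python
# Flag NAP fields (name, address, phone) that are not byte-for-byte
# identical across profiles. Profile values here are illustrative.
profiles = {
    "website": {"name": "Acme Plumbing",     "phone": "(802) 555-0142"},
    "gbp":     {"name": "Acme Plumbing Co.", "phone": "(802) 555-0142"},
    "yelp":    {"name": "Acme Plumbing",     "phone": "802-555-0142"},
}

def inconsistent_fields(profiles: dict) -> list[str]:
    fields = {f for p in profiles.values() for f in p}
    return [f for f in sorted(fields)
            if len({p.get(f) for p in profiles.values()}) > 1]

print(inconsistent_fields(profiles))  # ['name', 'phone']
```

A field counts as inconsistent if any single profile disagrees, which matches how fragmentation works: one stray variant is enough to split the entity.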

Review your schema markup

Go to Google's Rich Results Test (or any structured data validator) and enter your homepage URL. Check whether LocalBusiness schema is present. If it is, look at what's inside.

Three things to verify specifically: First, is the schema type specific (Dentist, LegalService, Plumber, Contractor) rather than the generic LocalBusiness? Specific types are stronger retrieval signals. Second, does `areaServed` list the actual cities and regions where you work? If it's blank or just your address city, you're missing coverage for every surrounding area. Third, does `hasOfferCatalog` or `makesOffer` describe your specific services? Generic schema that says "plumbing" without listing what kind of plumbing doesn't help a model match you to a specific query.
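To make the three checks concrete, here is a JSON-LD sketch that would pass all of them: a specific type, an explicit service area, and named services. The business name, cities, and offers are invented for illustration.

```json
{
  "@context": "https://schema.org",
  "@type": "Plumber",
  "name": "Acme Plumbing",
  "areaServed": ["Burlington", "Winooski", "South Burlington"],
  "makesOffer": [
    { "@type": "Offer",
      "itemOffered": { "@type": "Service", "name": "Burst pipe repair" } },
    { "@type": "Offer",
      "itemOffered": { "@type": "Service", "name": "Drain cleaning" } }
  ]
}
```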

If you don't have schema at all, that's a clear gap. If the schema is present but thin, the fix is usually an afternoon of work.

Read your own reviews for signal content

Pull up your most recent 20 Google reviews and read them like an AI model would. Models don't just read star ratings — they extract language about what your business does, where it does it, and what kind of client it serves.

Ask yourself: would a model reading these reviews know what specific services you offer? Would it know the cities and neighborhoods where you work? Would it understand the type of customer you serve?

A plumber with 40 reviews that all say "great service, very professional" is giving an AI model no useful signal. A plumber whose reviews say "fixed our main water line break in Burlington in the middle of January" is giving the model a lot. The difference isn't star rating — it's specificity.

You can't write your clients' reviews for them, but you can ask satisfied customers to mention what they had done and where. That's not gaming anything — it's just clear communication that happens to build useful signal.

What to do with what you find

Most self-audits produce one of three patterns.

You're absent everywhere. Your crawlers are blocked, or your schema is missing, or your entity profile is so thin that models can't construct a confident picture of your business. Fix crawl access first, then schema, then entity consistency. These are the structural foundations everything else rests on.

You're mentioned but not cited. Models know you exist but don't recommend you confidently. This usually means your signal exists but is incomplete or inconsistent. Run the entity consistency and review checks carefully — fragmented data is almost always the cause.

You're cited by one model but not others. This is common and often means your real-time crawlability is strong (which helps Perplexity) but your training-data presence is thin (which matters more for ChatGPT and Claude). Content depth and inbound links from recognized sources become the priority here.

A manual audit like this surfaces the pattern. It won't give you a numeric score or tell you which specific gaps are costing you the most — that's where a structured audit tool earns its value.

If you want a faster starting point, run a Signal Check on your domain. It runs the technical checks, entity validation, schema analysis, and AI query testing automatically and returns a scored report with a prioritized fix list. The free version covers the fundamentals. It takes about three minutes and tells you what a manual audit would take an hour to find.

Either way — do the audit. Most of your competitors haven't.
