When AI Gives Your Business Three Different Identities
In a Scout investigation dated April 27, 2026, we flagged something unusual while auditing a personal brand: Gemini produced three completely different identities for the same person in a single session.
Query A1 ("Who is [name]?") returned an American poet and educator at a northeastern US university. Query A3 ("Tell me about [name]") returned a UX/UI designer. A service query returned a real estate developer at a large private firm. None of them were correct. None were the same.
This isn't a glitch. It's a documented failure mode with a specific name and a specific cause. Understanding it is one of the most practically useful things a business owner can know about how AI models decide what's real.
Why the same name produces three different people
AI models don't look up your business from a single verified database. They reconstruct a picture from accumulated signals — training data, live web retrieval, structured data, directory references — and generate a response from whatever picture they can assemble.
When signals are consistent and dense, the model builds a coherent picture. When they're sparse or contradictory, it fills gaps probabilistically. And filling gaps on a low-signal name means different queries trigger different gap-filling.
The mechanism: at what researchers call the probabilistic resolution stage, Gemini re-runs entity disambiguation independently per query rather than caching the result across a session. Each query triggers a fresh selection from whatever low-frequency signals are available. When those signals are thin, a different "entity" surfaces each time — same name, different invented biography.
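The per-query re-selection described above can be illustrated with a toy model. This is not Gemini's actual implementation — just a sketch of the claimed mechanism, where each query makes a fresh weighted draw among candidate entities and nothing is cached across the session. The candidate names and weights are invented for illustration:

```python
import random

def resolve_entity(weights, seed):
    """One independent entity draw per query -- nothing cached across a session."""
    rng = random.Random(seed)
    names = list(weights)
    return rng.choices(names, weights=[weights[n] for n in names], k=1)[0]

# Thin, near-even signals: no candidate dominates.
thin = {"poet and educator": 0.35, "UX/UI designer": 0.33, "real estate developer": 0.32}

# Dense signals: one candidate carries almost all the weight.
dense = {"poet and educator": 0.98, "UX/UI designer": 0.01, "real estate developer": 0.01}

# Three "queries" in one session, each a fresh independent draw.
results = [resolve_entity(thin, seed) for seed in (1, 2, 3)]
stable = [resolve_entity(dense, seed) for seed in (1, 2, 3)]
```

With thin, near-even weights, the three draws disagree; with one dominant weight, every draw resolves to the same entity. That asymmetry is the whole argument for signal density.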
This is why the fix isn't simply "add schema." Schema markup that presents three slightly different versions of your business name creates three competing signals, not one strong one, and the model picks among them inconsistently.
When your brand name gets autocorrected away
Scout's May 2, 2026 edge-case investigation documented a related but distinct failure: a software product with a deliberately unusual spelling had its brand queries on Perplexity return results about an entirely different, well-known entity with a similar name.
Perplexity's query normalization layer treated the brand name as a misspelling of the more famous entity — one with massive training corpus presence — and the brand query never reached entity disambiguation at all. The spell-correction layer intercepted it first.
This failure is in some ways more severe than fragmentation. With fragmentation, the model at least attempts to describe your business and invents a wrong one. With autocorrect interception, your brand simply doesn't exist in the model's vocabulary. Every direct brand query gets consumed by a more familiar entity before it can resolve.
The threshold at which Perplexity's spell-correction stops intercepting is correlated with external citation volume — specifically, the number of high-authority domains that use your brand name consistently as a proper noun. Until enough indexed sources establish "this is a brand name, not a misspelling," the interception continues. Our investigation found no hard citation count for this threshold.
The naming inconsistency that causes fragmentation
The April 30, 2026 update to our schema markup research confirmed the specific mechanism. Entity fragmentation is almost never about whether schema is present. It's about whether every source that describes your business uses identical entity language.
If your Organization schema `name` field says "Acme Plumbing & Heating," your GBP says "Acme Plumbing," your Yelp listing says "Acme Plumbing and Heating Co.," and an old press release says "Acme Plumbing Services" — you haven't built four strong signals for one entity. You've built four separate entities, each with some weight but none with dominant authority.
The model then picks inconsistently, or worse, fragments your business across those variants. One practitioner source in our NAP consistency research put it directly: "20 perfectly consistent citations outperforms 100 inconsistent ones." The finding that opened this article — three different people from three queries — is what 100 inconsistent citations look like in practice.
Pick a canonical business name. Use it exactly, everywhere. The legal name on your incorporation documents doesn't have to be the canonical name you use for AI visibility purposes — but whatever you choose needs to be identical across every profile, listing, schema block, and page title that names your business.
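A minimal sketch of what that audit looks like in practice. The source names and listing text below are hypothetical example data, not pulled from any real profile; the point is that comparison must be exact, because "and" vs "&" or a trailing "Co." is enough to register as a separate entity signal:

```python
CANONICAL = "Acme Plumbing & Heating"

# Hypothetical listing text gathered from each profile that names the business.
sources = {
    "schema_org_name": "Acme Plumbing & Heating",
    "google_business_profile": "Acme Plumbing",
    "yelp": "Acme Plumbing and Heating Co.",
    "old_press_release": "Acme Plumbing Services",
}

def audit(canonical, listings):
    """Return every source whose name text differs from the canonical name.

    Exact string comparison is deliberate: any variation, however small,
    counts as a competing entity signal."""
    return {src: name for src, name in listings.items() if name != canonical}

mismatches = audit(CANONICAL, sources)
```

Here three of the four sources deviate — the "four separate entities" problem from the paragraph above, made concrete.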
What entity cluster density actually means
The fix for fragmentation involves what researchers call entity cluster density — multiple high-authority sources all pointing to the same, identically-named entity.
A `sameAs` property in your schema pointing to Wikidata, Crunchbase, and LinkedIn doesn't work because of any magic in the JSON-LD property itself. The value comes from having actual verified entries at those URLs, each consistently naming your business, giving AI models multiple traversal paths to the same entity record. Three confirmed traversal paths to one entity beat twelve inconsistent descriptions of variations on your name.
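A sketch of what that looks like as an Organization block, built here in Python so the structure is easy to inspect. The URLs and the Wikidata ID are placeholders — each `sameAs` target only helps if a verified entry actually exists there under the identical canonical name:

```python
import json

# Placeholder Organization schema: `name` matches the canonical name exactly,
# and every sameAs URL is assumed to hold a verified, identically-named entry.
org = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Acme Plumbing & Heating",
    "url": "https://example.com",
    "sameAs": [
        "https://www.wikidata.org/wiki/Q00000000",  # placeholder Wikidata ID
        "https://www.crunchbase.com/organization/acme-plumbing-heating",
        "https://www.linkedin.com/company/acme-plumbing-heating",
    ],
}

# Serialize for embedding in a <script type="application/ld+json"> tag.
json_ld = json.dumps(org, indent=2)
```

The design point is the triangle: three external records, one exact name, one schema block tying them together.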
For businesses with the Gemini multi-identity problem specifically, Wikidata's `P31` (instance of) field carries disproportionate weight. A 2025 arXiv paper (2505.02737) confirmed that proper knowledge graph class-subclass taxonomies directly improve zero-shot disambiguation. The type value — whether you're a LocalBusiness, a LegalService, a Person — tells the model what category of entity it's resolving to, which constrains its probabilistic options considerably.
For a business, the minimum viable Wikidata entry includes: correct `instance of` value (use the most specific type available, not just Q4830453/business), your city and country, your official URL, and links to your GBP and LinkedIn where possible. The entry itself takes about 30 minutes to create if you've never made one.
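The minimum viable entry can be summarized as a checklist keyed by Wikidata property ID. `P31` (instance of), `P131` (located in the administrative territorial entity), `P17` (country), and `P856` (official website) are real Wikidata properties; the Q-values below other than `Q16` (Canada) are placeholders, and the required set is an assumption drawn from the list above, not a Wikidata rule:

```python
REQUIRED = {"P31", "P131", "P17", "P856"}

# Placeholder claim set for a hypothetical Burlington, Ontario business.
minimum_entry = {
    "P31":  "Q0000001",  # instance of: most specific type, not just Q4830453 (business)
    "P131": "Q0000002",  # located in: your city's Wikidata item
    "P17":  "Q16",       # country: Q16 is Canada
    "P856": "https://example.com",  # official website
}

def missing_claims(entry):
    """Return the required property IDs that the entry does not yet declare."""
    return REQUIRED - entry.keys()
```

Running `missing_claims` against a draft entry before publishing it is a quick way to confirm nothing on the checklist was skipped.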
The geographic fragmentation case
Entity fragmentation applies to geography too. In our April 28, 2026 edge-case investigation, Perplexity B-series category queries for a Burlington, Ontario client returned competitive results for Burlington, Vermont.
Without an explicit province qualifier, Perplexity's geo-disambiguation defaulted to Burlington VT — which has a longer-established digital footprint, including more indexed professional service content, despite Burlington Ontario having four times the population. The Ontario city lost the disambiguation.
This matters for any Canadian business in a city that shares a name with a US counterpart: Cambridge, London, Windsor, Kingston, Hamilton. A B-series category query that appears to measure your AI visibility in your own city may be measuring a different city entirely. A Signal Check audit that doesn't apply provincial qualifiers to queries is returning Burlington Vermont data for a Burlington Ontario business.
The fix in your structured data and directory profiles: always use the city-province format. "Burlington, Ontario" or "Burlington, ON, Canada" — not just "Burlington." Every instance of your location across every source should include the provincial qualifier. This gives geo-disambiguation a clear, unambiguous signal.
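A small sketch of how to audit your own listings for the missing qualifier. The token set is truncated to a few examples for brevity — a real audit would carry all provincial names and abbreviations:

```python
# Truncated example set; a real audit would include every province and abbreviation.
PROVINCE_TOKENS = {"ON", "Ontario", "QC", "Quebec", "BC", "British Columbia"}

def has_province_qualifier(location):
    """True if any comma-separated part after the city is a known province token."""
    parts = [p.strip() for p in location.split(",")]
    return any(p in PROVINCE_TOKENS for p in parts[1:])

has_province_qualifier("Burlington")              # ambiguous: could be Vermont
has_province_qualifier("Burlington, Ontario")     # unambiguous
has_province_qualifier("Burlington, ON, Canada")  # unambiguous
```

Run it over every location string in your structured data and directory exports; any `False` is a listing feeding the wrong-city disambiguation described above.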
How to check whether you have this problem
The diagnostic is quick. Open Gemini or ChatGPT and ask about your business three ways in the same session: "Who is [business]?", "Tell me about [business]", and "What does [business] do?"
If you get three consistent descriptions, your entity signal is stable enough. If you get different descriptions — or one of them returns a different business or person entirely — you have fragmentation.
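If you save the three responses, the consistency check can be rough-scored mechanically. The responses below are invented stand-ins for model output, and the 0.5 threshold is a guess, not a calibrated value — treat any low pairwise similarity as a prompt for manual review:

```python
from difflib import SequenceMatcher
from itertools import combinations

# Invented example responses: two consistent, one resolving to a different entity.
responses = {
    "Who is [business]?": "Acme Plumbing & Heating is a plumbing company in Burlington, Ontario.",
    "Tell me about [business]": "Acme Plumbing & Heating provides plumbing services in Burlington, Ontario.",
    "What does [business] do?": "A real estate developer at a large private firm.",
}

def pairwise_similarity(texts):
    """Lowest pairwise similarity ratio (0..1) across all response pairs."""
    return min(
        SequenceMatcher(None, a, b).ratio()
        for a, b in combinations(texts, 2)
    )

# Assumed threshold: any pair this dissimilar suggests fragmentation.
fragmented = pairwise_similarity(list(responses.values())) < 0.5
```

Here the first two responses agree and the third describes a different business entirely, so the minimum pairwise score collapses and the check flags fragmentation.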
For unusual brand spellings or names that resemble common words, test whether the platform attempts to answer about your brand or redirects to something familiar. That's the autocorrect interception check. If it redirects, your external citation presence isn't high enough to establish your name as a known proper noun.
The order of fixes when fragmentation is present: first, standardize your canonical name across every source (this takes time and an audit of your directory presence); second, create or update your Wikidata entry with correct P31 type values; third, build consistent citation presence on four or five high-authority directories before pursuing volume.
If your Signal Check shows inconsistent citation text — where different platforms describe your business differently, or where direct brand queries return low confidence — that's fragmentation showing up in measurement. The fix is entity anchoring, not more schema.
Run a Signal Check at sourcepull.ca. The citation text your business receives across platforms often reveals whether you have a fragmentation problem before you even need to test manually.
See how your business scores on AI platforms.
Check your score — free