Sourcepull
Field Research

Research notebook

Field studies, calibration data, and methodology notes from running AEO audits on real businesses. Published in public, including the parts where we discovered our own work was wrong.

Entries: 02
Latest: 2026-04-25
Method: Live signal-checks against ChatGPT · Perplexity · Gemini · Claude
N°01 · 2026-04-25
Field Study · 6 min read

We Ran Our AI Visibility Audit On Eight Famous Brands. Only One Passed.

Wikipedia, Apple, Microsoft, OpenAI, Anthropic, Perplexity, Facebook, and Google walked into our scoring engine. Seven walked out with grades that surprised us — and one bug we found in public.

N°02 · 2026-04-25
Methodology Note · 4 min read

ChatGPT Won't Recommend Anthropic. Claude Won't Recommend OpenAI. We Have the Data.

A 4×4 matrix of LLM raters scoring LLM creators surfaced a reciprocal suppression pattern between OpenAI and Anthropic specifically. The pattern doesn't extend to other AI companies — only to each lab's closest direct competitor.

Run the same panel

Curious how your domain scores against the same panel that just told Wikipedia it has room to grow?

Check your score — free

AI visibility audits for local businesses. Find out what ChatGPT, Perplexity, and Google AI say about you — and fix it.

© 2026 Sourcepull. Burlington, Ontario, Canada.