A marketing director at a mid-size SaaS company recently sent us a screenshot that captured a problem we had been circling for months. ChatGPT had answered a product comparison query by recommending a competitor, citing that competitor by name three times in its response, and linking to a source URL as evidence. The URL was a blog post on the marketing director's own company domain. Her team had written the content, published the research, earned the citation from the AI, and watched a competitor collect the recommendation. Seer Interactive now has a name for this pattern and data from 541,213 LLM responses to quantify how often it happens: they call it a ghost citation, and their February 2026 analysis across 20 brands and six AI platforms found it in every sector they examined.
TL;DR
Seer Interactive analyzed 541,213 LLM responses across 20 brands and found that citations are post-hoc: the AI decides which brands to recommend from parametric memory first, then finds URLs to cite as evidence. A ghost citation occurs when your URL is cited but a competitor is mentioned instead. The awareness stage has the highest ghost citation rate at 5.0%. Category leaders with dominant entity graphs see near-zero rates. Entity-level signals (YouTube mentions at 0.737 correlation, brand web mentions at 0.664) predict AI recommendations far more strongly than page-level authority. Only 20% of brands achieve both citations and mentions. Monitoring that separates these layers is the prerequisite for detection.

A ghost citation occurs when an LLM cites a brand's URL but mentions a competitor by name instead. Seer Interactive found this pattern across all 20 brands studied.
Seer Interactive defines a competitive ghost citation as the specific case where a brand's URL is cited in an AI response, the brand itself is not mentioned by name, and a competitor is mentioned instead. The term "ghost" is apt because the brand's content is present in the response's bibliography but invisible in its recommendations. As Seer frames it: "If your content is informing that conversation and your name is not in the answer, you are funding your competitor's first impression on a buyer who has never heard of either of you."
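Seer's three conditions translate cleanly into a predicate. The sketch below is ours, not Seer's pipeline: the `LLMResponse` record shape, the domain-suffix matching, and the exact-string brand and competitor sets are all simplifying assumptions (a production detector would need alias handling and fuzzier name matching).

```python
from dataclasses import dataclass
from urllib.parse import urlparse

@dataclass
class LLMResponse:
    cited_urls: list[str]        # URLs in the answer's bibliography
    mentioned_brands: set[str]   # brand names appearing in the answer text

def is_ghost_citation(resp: LLMResponse, brand: str,
                      brand_domain: str, competitors: set[str]) -> bool:
    """Seer's definition: the brand's URL is cited, the brand itself is
    not named, and at least one competitor is named."""
    brand_cited = any(urlparse(u).netloc.endswith(brand_domain)
                      for u in resp.cited_urls)
    return (brand_cited
            and brand not in resp.mentioned_brands
            and bool(competitors & resp.mentioned_brands))
```

A response that cites `blog.acme.com` while naming only "Rival" would be flagged; the same response also naming "Acme" would not, because the brand passed the mention check.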
The gap between being cited and being mentioned is wider than most monitoring tools acknowledge. When a brand is mentioned in a response, its citation rate is 53.1%. When the brand is not mentioned, its citation rate drops to 10.6%, a 5x differential that runs in the wrong direction for anyone assuming a citation is the same as a recommendation. The content clears the retrieval check (the AI considers it trustworthy enough to reference) while the brand fails the recommendation check (the AI does not consider it relevant enough to name).
This aligns with what we observed in our own citation anchor study: URL stability and brand stability operate on different axes. Perplexity re-cites 62.4% of yesterday's URLs but has the lowest brand primary rate at 6.7%. The AI can trust your content repeatedly and still never recommend you.
Seer Interactive's six behavioral tests across 362,188 LLM responses suggest citations are post-hoc: the AI decides which brands to name first, then finds URLs to support those choices.
The most consequential finding in Seer's research is a hypothesis backed by six independent behavioral tests across 362,188 LLM responses: the AI generates its brand recommendation from parametric memory (the knowledge encoded during training) first, then goes looking for citations to support the choice after the fact. The citations are the bibliography, not the brainstorm. If this model is correct, and Seer is careful to note they cannot observe token generation logs directly, then the entire premise of "earn a citation to earn a recommendation" is backwards. A brand can produce the most authoritative content in its category, earn consistent retrieval, and still never be recommended because the model's parametric memory does not associate the brand strongly enough with the query topic.
RankScience describes the same mechanism using slightly different language: AI platforms evaluate content through an "evidence check" (is this accurate and useful enough to cite?) and a separate "recommendation check" (does this brand show up consistently in trusted places as a real solution?). Passing one does not guarantee passing the other. A brand's blog post can be the best source of evidence on a topic while the brand itself lacks the entity-level salience that would make the AI name it.
This maps directly onto the correlation data from Ahrefs' study of 75,000 brands: brand web mentions have a 0.664 correlation with AI visibility, brand search volume has a 0.392 correlation, and YouTube mentions reach 0.737, the highest single factor across all platforms they measured. Domain Rating, the traditional authority metric, correlates meaningfully lower. The signals that drive AI recommendations are entity-level brand signals, not page-level content signals, and that distinction is the gap where ghost citations live.
Ghost citation rates peak at 5.0% during the awareness stage, the most damaging funnel position because awareness queries are category-formation moments for buyers.
Seer tested ghost citation rates across five funnel stages and found the highest competitive ghost citation rate at the awareness stage: 5.0%. This matters disproportionately because awareness-stage queries are category-formation moments, the prompts that sound like "what tools exist for X" or "how do companies solve Y" where a buyer is forming a mental shortlist before they know enough to ask comparison or evaluation questions. If your content is the cited source for a category-defining answer and your name is absent from the recommendation list, a competitor is getting introduced to a prospect using your expertise as the evidence.
The category-level data makes this clearer. Brands with dominant entity graphs in their categories see near-zero ghost citation rates: Industrial Services at 0.3%, Financial Services and HR Technology both under 2%. These are brands that have invested years in building the kind of entity-level salience that parametric memory rewards, through consistent brand mentions across authoritative sources, high brand search volume, and deep topical association. The gap between a 0.3% ghost citation rate and a 5.0% rate is the gap between a brand the AI already knows and a brand it learns about only from the content it retrieves.
Semrush's AI Visibility Index data puts a broader frame on the problem: fewer than one in five brands (20%) achieve both frequent mentions and consistent citations in AI answers. The remaining 80% sit on one side of the divide or the other, and the ghost citation phenomenon describes what happens to brands stuck on the citation side without the mention side.
Only 11% of domains are cited by both ChatGPT and Perplexity, and 85% of brand mentions in AI responses come from third-party pages brands cannot control.
Ghost citations compound with two other structural features of AI search that Sill has been tracking. The first is platform divergence: Omniscient Digital's analysis of 23,387 unique sources across 240 prompts found that only 11% of domains are cited by both ChatGPT and Perplexity. A brand can be a trusted source on one platform and completely absent from another, meaning ghost citation patterns are likely platform-specific as well. Our own data confirms this at the brand level: 91.6% of cited URLs appear on only one AI platform, and 55% of brands have a 10+ point SOV spread between their strongest and weakest platforms.
The second structural feature is the third-party dominance problem. AirOps' 2026 State of AI Search report found that 85% of brand mentions in AI responses come from third-party pages, not from brands' own domains. Brands are 6.5x more likely to be cited through external sources than through their own websites. When combined with the ghost citation finding, the picture that emerges is that a brand's own content can earn citations (because it passes the evidence check) while third-party content is what drives mentions (because third-party coverage in authoritative venues is what builds the entity-level salience that parametric memory draws on). The brand creates the evidence but someone else's coverage of the brand is what earns the recommendation.
| Signal | Correlation with AI visibility | Source |
|---|---|---|
| YouTube mentions | 0.737 | Ahrefs, 75K brands |
| Brand web mentions | 0.664 | Ahrefs, 75K brands |
| Brand anchor text | 0.527 | Ahrefs, 75K brands |
| Brand search volume | 0.392 | Ahrefs, 75K brands |
| Brand search volume | 0.334 | Digital Bloom, 300K keywords |
| Domain Rank | 0.25 | Seer Interactive |
| Brand MSV | 0.18 | Seer Interactive |
The top three predictors of AI brand visibility are all entity-level signals: YouTube mentions, brand web mentions, and brand anchor text. Page-level authority (Domain Rank) and brand monthly search volume (Brand MSV) sit at the bottom. Ghost citations are the measurable consequence of investing in page-level content quality without building entity-level salience.
AirOps found only 30% of brands persist between consecutive AI answers. Brands with both mentions and citations show 40% higher reappearance likelihood.
Ghost citations sit on top of an already fragile persistence landscape. AirOps' 2026 State of AI Search report found that only 30% of brands that appear in an AI answer show up again in the very next response to the same query, and when the same query is run five times in a row, just 20% of brands persist across all five. SparkToro's January 2026 analysis of 2,961 prompts corroborates this from a different angle: AI tools produce different brand recommendation lists more than 99% of the time when given the same prompt.
Our own brand persistence study found that the overall persistence rate across our 10-brand sample is 29%. Among brands that appeared on day N, 82.6% persist to day N+1, but absent brands break through only 6% of the time. Position creates a steep gradient: primary brands persist at 93%, secondary at 77%, mentioned at 54%. Momentum compounds on top of this: brands on 8-14 day streaks persist at 94%, and 15+ day streaks persist at 100% in our data.
The AirOps data adds a critical dimension: brands earning both citations and mentions show 40% higher likelihood of reappearing across answers compared to brands that earn only one or the other. A ghost-cited brand, by definition, earns citations without mentions, placing it in the lowest-persistence category. Ghost citations are not just a missed credit problem; they correlate with worse persistence outcomes because the brand is not building the dual visibility that compounds over time.
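The day-over-day persistence numbers above reduce to a simple ratio over consecutive snapshots: of the brands present on day N, how many reappear on day N+1. A minimal sketch, assuming each day's answer has already been reduced to a set of brand names (the snapshot format is our assumption, not AirOps' or our study's exact pipeline):

```python
def persistence_rate(daily_brands: list[set[str]]) -> float:
    """Fraction of brand appearances on day N that recur on day N+1,
    pooled across all consecutive pairs of daily snapshots."""
    persisted = total = 0
    for today, tomorrow in zip(daily_brands, daily_brands[1:]):
        total += len(today)            # appearances eligible to persist
        persisted += len(today & tomorrow)  # appearances that did
    return persisted / total if total else 0.0
```

The same loop, restricted to brands in a given position tier, would produce the primary/secondary/mentioned gradient described above.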
AI-cited content averages 1,064 days old (Ahrefs, 17M citations). Content updated within 90 days earns 67% more citations, but freshness alone does not prevent ghost citations.
Ahrefs' analysis of 16.975 million cited URLs found that AI-cited pages are 1,064 days old on average, or roughly 2.9 years. That is 25.7% fresher than the average organic Google result (1,432 days), but still old enough that a brand's competitive positioning and product capabilities may have shifted substantially since the content was written. ChatGPT shows the strongest freshness preference, citing URLs that are 393-458 days newer than organic Google results, while Perplexity cites the oldest content at an average of 1,166 days.
Content freshness does reduce ghost citation risk indirectly. SE Ranking found that content updated within 90 days earns 67% more AI citations, and BrightEdge found that content updated within 60 days makes a site 1.9x more likely to be cited. Pages not updated quarterly are 3x more likely to lose their citation positions entirely (AirOps). But freshness operates at the citation layer, not the mention layer. A brand can update its content every 30 days, earn more citations than ever, and still be ghost-cited if the brand itself lacks entity-level salience in the model's parametric memory.
Seer Interactive found an adjacent signal worth tracking: listicle citations dropped 30% month-over-month in early 2026, from 160,000 to 111,000 across 2 million citations. The citation landscape is consolidating toward fewer, more authoritative sources, which means the remaining citations carry more weight and ghost citations become proportionally more costly when they occur.
Wil Reynolds of Seer Interactive found that changing a website footer rewrote ChatGPT's brand narrative in under 36 hours, suggesting entity signals can shift faster than expected.
The correlation data from Ahrefs, Digital Bloom, and Seer Interactive converges on a consistent hierarchy for the signals that convert citations into mentions. YouTube presence correlates at 0.737 with AI visibility, brand web mentions at 0.664, and brand anchor text at 0.527. These are all entity-level signals: how often other sources name the brand in contexts relevant to the query topic, not how well the brand's own pages rank for related keywords. The practical implication is that content marketing alone, no matter how well-executed, operates primarily on the citation layer unless it also generates external coverage that builds entity-level association.
Wil Reynolds, Seer Interactive's founder, discovered an unexpectedly direct lever when he changed Seer's website footer to include specific positioning statements. ChatGPT's narrative about Seer shifted within 36 hours, rewriting a story the model had maintained for two years. This suggests that while entity salience takes years to build at scale, there are high-leverage structural elements on owned properties that can shift parametric associations faster than the broader brand-building timeline would suggest. Reynolds's description of the finding: "Everything I try to do is a guess, so best results...no idea. My most surprising test was when we changed our footer and the statements we put in our footer changed our answer about our brand in ChatGPT in 36 hours."
The defensive baseline is monitoring. A brand cannot fix ghost citations it cannot see, and most monitoring tools track citations and mentions as a single metric without distinguishing between them. Sill's daily monitoring tracks both layers independently across six AI platforms, which means a ghost citation pattern surfaces as a brand that has consistent citation presence but low or zero SOV on the same prompts. Our brand position study showed that the features predicting position are different from the features predicting persistence, and ghost citations sit in exactly this gap: the brand persists in the bibliography but not in the recommendation.
AI-referred visitors convert at 14.2% vs 2.8% for traditional organic search (Exposure Ninja, March 2026), a 5x multiple that ghost-cited brands forfeit entirely.
Three independent datasets now confirm that AI-referred traffic converts at a significant premium over traditional organic search. Exposure Ninja's March 2026 benchmark found a 14.2% conversion rate for AI referrals versus 2.8% for Google organic, a 5x multiple. Microsoft Clarity's analysis across 1,200+ publisher sites found AI-sourced visitors converting at 1.66% for sign-ups versus 0.15% for search, an 11x difference. Adobe's Holiday 2025 data showed AI referral traffic converting 31% higher with revenue per visit up 254%.
A ghost-cited brand forfeits this premium entirely. The AI uses the brand's content to build a trustworthy answer, recommends a competitor by name, and the buyer either clicks through to the competitor directly or runs a branded search for it. The ghost-cited brand's content did the work of converting the buyer from curiosity to intent, but the intent now carries a competitor's name. With AI referral traffic growing 357% year-over-year and 58% of consumers reporting they have replaced traditional search with AI for product recommendations (Shoptalk 2026), the volume of purchase-intent traffic flowing through AI recommendations is large enough that losing it to ghost citations represents a measurable revenue leak.
The attribution gap makes this harder to detect. SparkToro's dark social research found that 70.6% of AI-referred traffic is unattributed in standard analytics, appearing as direct traffic because mobile apps strip referrer headers and users copy-paste URLs. A brand whose content is being ghost-cited may not even see the traffic it is failing to capture, because the referral never reaches its analytics at all. The measurement framework we outlined in our AI search ROI post addresses the attribution problem with six metrics: AI referral segmentation, branded search trends, money page conversion isolation, citation frequency, cross-platform SOV, and content-to-citation lag. Ghost citations add a seventh dimension: the gap between citation presence and mention presence on the same prompts.
Sill tracks citations and brand mentions independently across six AI platforms daily, surfacing ghost citation patterns as a gap between citation presence and SOV.
The operational response to ghost citations requires measurement that separates the citation layer from the mention layer. Most AI visibility tools aggregate these into a single score, which means a brand with high citation presence and zero mention presence can look healthy on a monitoring dashboard while a competitor collects the recommendations. Sill's daily monitoring tracks six AI platforms independently, recording both which URLs are cited and which brands are mentioned for every prompt in a brand's monitoring set. A ghost citation pattern surfaces as a brand with consistent citation presence but low or declining SOV on the same set of prompts.
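The detection signal described here, citation presence without mention presence on the same prompts, can be approximated from any response log. A hypothetical sketch (the record tuple shape is ours, not Sill's schema): per prompt, compute the citation-presence rate minus the mention-presence rate, and treat a large positive gap as a ghost-citation flag.

```python
from collections import defaultdict

def citation_mention_gap(records: list[tuple[str, bool, bool]]) -> dict[str, float]:
    """Per-prompt gap between citation-presence rate and mention-presence
    rate. Each record is (prompt, brand_cited, brand_mentioned); a large
    positive gap flags a ghost-citation pattern on that prompt."""
    tallies = defaultdict(lambda: [0, 0, 0])  # [responses, cited, mentioned]
    for prompt, cited, mentioned in records:
        t = tallies[prompt]
        t[0] += 1
        t[1] += cited      # bools count as 0/1
        t[2] += mentioned
    return {p: (c - m) / n for p, (n, c, m) in tallies.items()}
```

A prompt cited in most responses but mentioned in few would score near 1.0; a healthy prompt, where citation and mention travel together, scores near 0.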
The diagnosis layer connects ghost citation patterns to specific content and specific competitors. When Sill detects that a brand's URL is cited on a prompt where a competitor is mentioned, the Recommendations engine generates targeted interventions: entity-level brand signals to build (structured data, author schema, brand positioning in footers and about pages), content modifications that strengthen the brand-topic association on the specific pages being cited, and off-site coverage priorities in the venues that correlate most strongly with AI mention rates.
Seer Interactive's Wil Reynolds found that small structural changes can shift AI brand narratives surprisingly fast. But knowing which structural changes to make requires first knowing which prompts are ghost-citing your content, which competitors are capturing the recommendations, and which platforms exhibit the pattern most severely. Daily, multi-platform monitoring is the prerequisite for turning ghost citation detection into a systematic response rather than a set of ad hoc experiments.
Sill tracks citations and mentions independently across six AI platforms, surfacing the gap between content the AI trusts and brands it actually recommends.
Request your first analysis today to see where you stand.