
We Analyzed 139 Brands Across 86 Industries. 23% Are Completely Invisible to AI.

A few months ago, a VP of Marketing at a Series B SaaS company showed me her team's competitive analysis. It was thorough: SEO rankings, paid search spend, social followers, review site ratings. She knew exactly where her brand stood on every channel that mattered. Then I asked her where the brand ranked when a buyer asked ChatGPT for a recommendation in her category. She paused, opened a browser, typed in the query, and watched as the response listed four competitors by name. Her company was not among them. That moment has played out in nearly every conversation we've had with brands over the past year. It is the reason we built Sill, and it is the reason we are publishing this data.

TL;DR

Across 182 AI visibility analyses spanning 139 brands and 86 industries, 23% of brands score zero SOV — completely invisible across ChatGPT, Gemini, Google AI Overviews, and Perplexity. The median score is 15 out of 100. Gemini is the most generous platform (avg 23.6 SOV); Perplexity is the most selective (avg 15.0, 56% zero-SOV rate). In B2B SaaS, scores range from Wiz (53.2) to Squarespace (0). YouTube is the #1 cited source by 8x, with 674 citations across 48 pages. Third-party mention density is the strongest predictor of AI visibility (0.664 correlation, Ahrefs). Domain authority and backlinks show near-zero correlation with AI recommendations.


What we measured

Over the past several months, Sill's pipeline has completed 182 AI visibility analyses across 139 distinct brands spanning 86 market segments. Each analysis queries ChatGPT, Gemini, Google AI Overviews, and Perplexity with purchase-intent prompts tailored to the brand's category: the kinds of questions a real buyer would type into an AI assistant before making a decision. We score each response for Share of Voice (SOV) on a 0 to 100 scale, measuring whether the brand was mentioned, in what position, and with what sentiment.

The brands range from early-stage startups with a handful of employees to publicly traded enterprises with billions in revenue. The industries span B2B SaaS, healthcare, cybersecurity, e-commerce, manufacturing, clean energy, fintech, education, and dozens more. This is not a narrow vertical study; it is a cross-industry snapshot of how AI engines perceive the commercial landscape in 2026.

The distribution nobody expected

When I first looked at the aggregate data, I expected a bell curve: most brands clustered in the middle, a few outliers on each end. That is not what the data shows. The distribution is bimodal: brands either have meaningful AI visibility or they have almost none.

SOV Range | Runs | Share | What it means
----------|------|-------|--------------
0 (invisible) | 41 | 23% | AI never mentions the brand across any platform
1–10 | 20 | 11% | Occasional mentions, low position, inconsistent
11–20 | 62 | 34% | Present but not prominent; often mid-list or qualified
21–30 | 9 | 5% | Competitive; regularly recommended in multi-brand lists
31–50 | 36 | 20% | Strong presence; frequently top-3 in recommendations
51–100 | 14 | 8% | Dominant; the brand AI reaches for first

The median SOV is 15 out of 100. The mean is 19.4. But these averages obscure the real story: there is a chasm between the 23% of brands that score zero and the 28% that score above 30. The middle ground is thin. AI visibility is not a gradient; it is closer to a binary. You are either in the conversation or you are not.
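The bimodal shape is easy to verify from raw scores. Here is a minimal sketch of the bucketing behind the table above; the toy scores are illustrative, not the underlying dataset.

```python
from statistics import mean, median

# SOV buckets matching the distribution table (upper bounds inclusive).
BUCKETS = [(0, "invisible"), (10, "1-10"), (20, "11-20"),
           (30, "21-30"), (50, "31-50"), (100, "51-100")]

def summarize(scores):
    """Bucket SOV scores and report the headline stats."""
    counts = {label: 0 for _, label in BUCKETS}
    for s in scores:
        for upper, label in BUCKETS:
            if s <= upper:
                counts[label] += 1
                break
    return counts, median(scores), mean(scores)

# Toy scores showing the hollow middle: mass at zero and above 30.
counts, med, avg = summarize([0, 0, 15, 18, 42, 60])
print(counts["invisible"], med)  # 2 16.5
```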

For context, when we published our first SOV analysis of 77 brands several months ago, the median was 3.8. The improvement reflects both a broader dataset and the fact that more brands are actively investing in AI visibility. But the structural pattern has not changed: a large cohort remains at zero regardless of their size, domain authority, or SEO performance.

Not all platforms are equally generous

Each AI platform has its own citation behavior, its own source preferences, and its own threshold for when a brand earns a mention. Across 1,000 platform-level measurements, the differences are consistent and significant.

Platform | Avg SOV | Max SOV | Zero-SOV Rate
---------|---------|---------|--------------
Gemini | 23.6 | 100 | 49%
Google AI Overviews | 20.9 | 100 | 49%
ChatGPT | 18.3 | 100 | 50%
Perplexity | 15.0 | 86.1 | 56%

Gemini is the most generous recommender, averaging 23.6 SOV. Perplexity is the most selective, averaging 15.0 and ignoring brands entirely 56% of the time. ChatGPT falls in the middle but has the widest distribution: when it does mention a brand, it is more likely to give that brand a strong position. Google AI Overviews mirrors its parent search engine's behavior, leaning on organic ranking signals more heavily than the other three.

The practical consequence: a brand that monitors only one platform is seeing, at best, a quarter of the picture. We have documented this in detail in our platform divergence analysis, where we found that 55% of brands have a gap exceeding 10 SOV points between their best and worst platform.

The B2B SaaS leaderboard: size is not the predictor you think

B2B SaaS brands make up the largest single cohort in our dataset. The spread within this cohort is striking: the highest-scoring brand, Wiz, earns a 53.2 SOV; the lowest, Squarespace, earns zero. Both are well-known companies. Both have strong traditional SEO profiles. The difference is not brand recognition or domain authority. It is how the brand shows up in the places AI engines look.

Brand | Segment | SOV Score
------|---------|----------
Wiz | Cloud Security | 53.2
Quadient | Customer Communications | 48.8
Fortinet | Cybersecurity | 45.5
Avalara | Tax Compliance | 45.0
Procore Technologies | Construction | 39.8
HeyGen | AI Video | 32.5
ZenMaid | Cleaning Business Software | 19.5
Illumio | Cybersecurity | 6.3
Squarespace | Website Builder | 0
Trademarkia | Legal Tech | 0

Wiz and Illumio are both cybersecurity companies. Fortinet is as well. Yet Wiz scores 53.2, Fortinet scores 45.5, and Illumio scores 6.3. The gap between Wiz and Illumio is not explained by company size, revenue, or brand age; it is explained by the density of third-party content that mentions these brands in the contexts AI engines care about. Wiz has deep coverage across review platforms, comparison content, and technical publications. Illumio's web footprint, while strong for traditional search, is thinner in the third-party citation layer that AI engines weight most heavily.

This pattern holds across verticals. The Similarweb GenAI Brand Visibility Index found the same thing at a larger scale: content-authority brands like NerdWallet jumped 66 positions above brands with far more search volume. Our data confirms that AI visibility is earned through a different set of signals than the ones traditional marketing stacks are built to measure.

Where AI engines actually look for answers

When we trace the citations that AI engines attach to their recommendations, a clear hierarchy emerges. The data comes from 1,000 cited pages tracked across our pipeline. The dominance of certain source types is not subtle.

Source | Unique Pages Cited | Total Citations | Avg Citations Per Page
-------|--------------------|-----------------|-----------------------
YouTube | 48 | 674 | 14.0
Reddit | 23 | 86 | 3.7
Review / comparison sites | 35 | 134 | 3.8
Tech publications | 21 | 75 | 3.6
Forbes / major publications | 7 | 28 | 4.0

YouTube is the single most cited domain in our dataset by a factor of eight. It accounts for 674 total citations across 48 unique pages, averaging 14 citations per page. The next closest source, review and comparison sites, manages 3.8 citations per page. This finding aligns with Ahrefs' large-scale study of 75,000 brands, which found YouTube presence has a 0.737 correlation with AI mentions: the strongest single predictor of any factor measured.
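For readers who want to reproduce this kind of aggregation, the citations-per-page figures can be derived from raw citation records along these lines. The tuple format and toy URLs are hypothetical, not Sill's actual schema.

```python
from collections import defaultdict

def citations_per_page(citations):
    """Aggregate raw citation records into per-source-type stats.

    `citations` is a list of (source_type, url) tuples -- a simplified,
    hypothetical record format for illustration.
    """
    pages = defaultdict(set)    # source_type -> unique page URLs
    totals = defaultdict(int)   # source_type -> total citation count
    for source_type, url in citations:
        pages[source_type].add(url)
        totals[source_type] += 1
    return {
        s: {
            "unique_pages": len(pages[s]),
            "total_citations": totals[s],
            "avg_per_page": round(totals[s] / len(pages[s]), 1),
        }
        for s in pages
    }

# Toy example: one YouTube page cited three times, one Reddit thread once.
stats = citations_per_page([
    ("youtube", "https://youtube.com/watch?v=abc"),
    ("youtube", "https://youtube.com/watch?v=abc"),
    ("youtube", "https://youtube.com/watch?v=abc"),
    ("reddit", "https://reddit.com/r/foo/1"),
])
print(stats["youtube"]["avg_per_page"])  # 3.0
```

The key design point is deduplicating by URL before dividing: a page cited many times across many answers is a stronger signal than many pages cited once.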

Reddit is the second most important source but for a different reason: it dominates Perplexity's citation behavior specifically, appearing in 46.7% of that platform's top-10 cited sources. Review sites like rtings.com, soundguys.com, and tomshardware.com appear repeatedly because they produce exactly the kind of structured comparison content that AI engines extract most reliably: specifications, head-to-head comparisons, and definitive recommendations with supporting data.

One number stands out above the rest: 74% of cited pages appear on two or more platforms. Getting cited by one AI engine is common. Getting cited by multiple AI engines requires a level of content authority that most brand-owned pages have not yet achieved. The pages that cross this threshold tend to be third-party reviews, YouTube videos, and comparison articles on editorially independent sites.
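The cross-platform figure comes from counting how many distinct platforms cite each deduplicated URL. A minimal sketch, assuming a simplified platform-to-URLs mapping rather than the real pipeline's records:

```python
def cross_platform_share(citation_log):
    """Fraction of cited URLs that appear on two or more platforms.

    `citation_log` maps platform name -> set of cited URLs
    (a hypothetical stand-in for the real citation records).
    """
    platforms_per_url = {}
    for platform, urls in citation_log.items():
        for url in urls:
            platforms_per_url.setdefault(url, set()).add(platform)
    if not platforms_per_url:
        return 0.0
    multi = sum(1 for p in platforms_per_url.values() if len(p) >= 2)
    return multi / len(platforms_per_url)

share = cross_platform_share({
    "chatgpt":    {"a.com/review", "b.com/compare"},
    "perplexity": {"a.com/review", "c.com/thread"},
    "gemini":     {"a.com/review"},
})
print(f"{share:.0%}")  # a.com/review is on 3 platforms -> 1 of 3 URLs -> 33%
```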

The crowded middle: where most brands fight for scraps

The largest single bucket in the distribution is the 11–20 SOV range, which contains 34% of all brands. These are brands that AI mentions sometimes, in some contexts, on some platforms. They appear in recommendation lists, but rarely at the top. They are included in comparisons, but without the enthusiastic framing that drives buyer action. They are present without being prominent.

For these brands, the competitive dynamics are about to get uncomfortable. As more companies invest in Generative Engine Optimization, the middle tier will get more crowded. The brands that break out of the 11–20 range will be the ones that systematically build the off-site signals AI engines weight most heavily: third-party mentions, YouTube content, review platform presence, and editorial coverage. The brands that do not will find themselves pushed toward zero as competitors fill the citation layer above them.

We see this pattern already in crowded verticals. In customer support and helpdesk software, five brands in our dataset average a 5.7 SOV. In email marketing, five brands average 3.8. In project management, five brands average 5.1. When an AI engine fields a query in these categories, it has dozens of valid brands to choose from; the ones that have built the strongest third-party citation layer are the ones that appear. The rest are invisible, regardless of how good the product is.

Segment | Brands in Dataset | Avg SOV | Assessment
--------|-------------------|---------|-----------
Customer Support / Helpdesk | 5 | 5.7 | Highly competitive; low visibility across the board
Project Management | 5 | 5.1 | Saturated category; even leaders struggle for mentions
Email Marketing / Automation | 5 | 3.8 | Crowded; AI spreads mentions thinly across many brands
Digital Marketing Agency | 4 | 3.8 | Service businesses have weakest AI presence overall

What separates a 50 from a zero

The question this data demands is straightforward: what are the brands with high SOV scores doing differently? We have written extensively about the specific tactics in our evidence-ranked GEO tactics analysis, but the patterns in this dataset reinforce three structural themes.

Third-party citation density is the primary differentiator. The strongest predictor of AI visibility is not what is on your website; it is how often your brand is mentioned on other people's websites. Ahrefs found a 0.664 correlation between branded web mentions and AI citations across 75,000 brands. Our data confirms this: the brands scoring above 40 SOV universally have deep third-party coverage across review platforms, comparison articles, YouTube reviews, and editorial press. The brands scoring zero almost always have thin or nonexistent off-site mention profiles.

YouTube is the highest-leverage channel most brands are ignoring. With 674 total citations and a 14.0 citations-per-page average, YouTube content is by far the most efficient citation source in our dataset. A single well-positioned YouTube review can generate more AI visibility than dozens of optimized blog posts. The brands in our dataset that score highest tend to have active YouTube ecosystems: product reviews, tutorials, comparisons, and unboxing videos created by independent creators, not just the brand's own channel.

On-site content matters, but differently than in SEO. The anatomy of pages that AI engines cite follows a specific structural pattern: answer capsules after question-framed headings, statistics with source attribution, comparison tables, and content updated within 90 days. Domain authority and backlink count show near-zero correlation with AI citation. What matters is whether the content is structured in a way an AI engine can extract, verify, and present to a user.

What this means for your brand

The VP of Marketing I mentioned at the start of this piece did something about it. She started tracking her AI visibility daily, identified the off-site gaps her competitors had filled, and invested in the third-party content layer that AI engines weight most heavily. Within weeks, her brand went from absent to mentioned; within months, from mentioned to recommended. The brands that act on this data now will have a structural advantage that compounds over time. The brands that wait will find themselves in the 23%.

The first step is knowing your number. The second step is understanding why it is what it is. The third step is doing something about it.

Find out where your brand stands

Sill monitors your AI Share of Voice daily across ChatGPT, Perplexity, Gemini, and Google AI Overviews. See your score, benchmark against competitors, and get prioritized recommendations for what to fix first.

Methodology

All data in this analysis comes from Sill's production monitoring pipeline. Each brand analysis queries four AI platforms (ChatGPT, Gemini, Google AI Overviews, Perplexity) with purchase-intent prompts tailored to the brand's market segment. SOV is calculated as a composite of position weight, sentiment weight, and mention rate on a 0–100 scale. Platform-level scores are averaged across all queries for a given run. Aggregate SOV is the mean of platform-level scores. Cited pages are tracked via the citations attached to AI-generated responses and deduplicated by URL. Data reflects completed runs as of March 2026.
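A composite of position weight, sentiment weight, and mention rate could be sketched roughly as follows. The weights, field names, and position formula here are assumptions for illustration, not Sill's production scoring model.

```python
def sov_score(responses, pos_w=0.5, sent_w=0.3, mention_w=0.2):
    """Illustrative composite Share-of-Voice score on a 0-100 scale.

    `responses` is a list of per-query dicts, e.g.
    {"mentioned": True, "position": 2, "list_len": 5, "sentiment": 0.8}.
    """
    if not responses:
        return 0.0
    score = 0.0
    for r in responses:
        if not r["mentioned"]:
            continue
        # Earlier positions earn more credit: 1.0 for first place,
        # scaling down linearly with position in the list.
        position = 1.0 - (r["position"] - 1) / max(r["list_len"], 1)
        sentiment = max(0.0, min(1.0, r["sentiment"]))  # clamp to [0, 1]
        score += pos_w * position + sent_w * sentiment + mention_w * 1.0
    # Average over all queries, so queries with no mention drag the score down.
    return round(100 * score / len(responses), 1)

# First place with positive sentiment in one of two queries -> 50.0.
print(sov_score([
    {"mentioned": True, "position": 1, "list_len": 5, "sentiment": 1.0},
    {"mentioned": False, "position": 0, "list_len": 0, "sentiment": 0.0},
]))  # 50.0
```

Note how the averaging step matches the behavior described in the article: being absent from half the purchase-intent queries halves the score, no matter how strong the mentions are.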

References

  1. Aggarwal et al. "GEO: Generative Engine Optimization." KDD 2024.
  2. Ahrefs. "LLM Brand Visibility Study: 75,000 Brands." Ahrefs Blog, 2025.
  3. SE Ranking. "AI Citations Study: 129,000 Domains." SE Ranking Blog, 2025.
  4. Similarweb. "GenAI Brand Visibility Index." Similarweb, 2026.
  5. Profound. "680M Citation Analysis: Platform Source Preferences." Profound Research, 2025.
  6. Chen et al. "AI Search Citation Analysis: Earned Media Bias." University of Toronto, 2025.
  7. BrightEdge. "YouTube Citation Growth in AI Overviews." BrightEdge Research, 2025.


Daniel Wang

Founder · UC Berkeley MIDS

Previously at Nordstrom, Bloomberg, Hexagon (now Octave)
