
45 Billion Sessions a Month: AI Search Outgrew Its Measurement Stack

In the first week of March 2026, Graphite.io CEO Ethan Smith published a study that quantified something the industry had been estimating with varying degrees of optimism for two years: AI assistants now generate 45 billion sessions per month worldwide, equivalent to 56% of global search engine volume. When filtered to search-like queries only, the figure is 28% globally and 17% in the US. The study also found that 83% of global AI usage occurs inside mobile apps, largely invisible to traditional web analytics. Total usage across search and AI has grown 26% globally since 2023. AI is expanding discovery, not replacing search.

TL;DR

In March 2026, Graphite.io quantified what the industry had been estimating: AI assistants generate 45 billion sessions per month, equivalent to 56% of global search engine volume. Even filtering to search-like queries only, AI accounts for 28% of search worldwide and 17% in the US. The behavioral shift is confirmed by independent sources: 37% of consumers now start searches with AI (Eight Oh Two), 58% use AI for product recommendations (HBR), and 97% of enterprise digital leaders say AEO/GEO is delivering measurable impact (Conductor).

The conversion evidence converges from three independent studies: Microsoft Clarity found AI traffic converts at 3x search across 1,200 publisher sites; Adobe documented a 693% YoY surge in AI retail referrals that convert 31% higher; Exposure Ninja measured a 14.2% conversion rate against Google organic's 2.8%.

Yet 93% of AI Mode sessions end without a click (Semrush), AI recommendations differ more than 99% of the time for the same prompt (SparkToro), and Forrester projects enterprises will defer 25% of planned AI spend into 2027 due to ROI concerns. Sill's own monitoring data across 139 brands and 86 industries shows 23% score zero SOV, 55% have 10+ point platform divergence, and 91.6% of cited URLs appear on only one platform. The scale grew. The measurement infrastructure did not.

Abstract visualization of data flowing through a measurement gap, representing the disconnect between the scale of AI search adoption and the analytics infrastructure available to track it

The behavioral shift is no longer theoretical

Until Q1 2026, the evidence for consumer AI adoption was mostly directional: surveys with small samples, vendor claims, and anecdotal traffic reports. That changed. Three independent studies with different methodologies now converge on the same conclusion: a substantial share of consumers have already moved to AI-first discovery.

Eight Oh Two surveyed 500 AI-using consumers and found that 37% now start searches with AI instead of Google. 60% said AI delivers better, clearer answers than traditional search. 85% still cross-reference AI answers with traditional search, which means AI is shaping initial consideration sets even when Google gets the final click. 59% believe AI will become their primary search tool soon.

The HBR article “Forget What You Know About Search” (Dubois, Dawson, and Jaiswal, June 2025) documented the shift at a larger scale: 58% of 12,000 surveyed consumers turn to generative AI for product and service recommendations, up from 25% in 2023. The same article introduced the “Share of Model” concept: the percentage of AI-generated recommendations in a category that mention a specific brand. The framing is significant because it treats AI recommendation share as the successor to share of voice, share of search, and share of market.

Conductor's State of AEO/GEO Report (January 2026) surveyed 250+ enterprise digital leaders at 500+ employee organizations across 12 industries. 97% reported AEO/GEO is already driving measurable positive business impact. 94% plan to increase investment in 2026. The average allocation is 12% of digital budgets. AEO/GEO ranked as the number one strategic marketing priority for 2026.

| Signal | Value | Source | Date |
| --- | --- | --- | --- |
| AI sessions as % of global search volume | 56% (45B sessions/mo) | Graphite.io | Mar 2026 |
| Consumers starting searches with AI | 37% | Eight Oh Two (n=500) | Nov 2025 |
| Consumers using AI for product recommendations | 58% (up from 25% in 2023) | HBR / Dubois, Dawson, Jaiswal (n=12,000) | Jun 2025 |
| Enterprise leaders reporting positive AEO/GEO impact | 97% | Conductor (n=250+) | Jan 2026 |
| Gen Z preferring AI over traditional search | 82% | Yext consumer survey | 2025-2026 |
| Shoppers expecting to use agentic AI for purchases within 12 mo | 60% | Kearney survey / HBR | Mar 2026 |

Three independent sources confirm the conversion premium

The conversion evidence from AI-referred traffic has been building for over a year. Q1 2026 brought the first independent convergence from studies with meaningfully different methodologies, sample sizes, and measurement periods.

Microsoft Clarity studied 1,200+ publisher sites (November 2025) and found that AI-sourced visitors convert to sign-ups at 1.66%, versus 0.15% from search, 0.13% from direct, and 0.46% from social. Copilot-referred traffic converts at 17x the rate of direct traffic. Perplexity-referred traffic converts at 7x the rate of both direct and search. The study also measured a 155.6% growth in AI referral traffic over eight months, though AI traffic still represents less than 1% of total traffic for the average site.

Adobe's Holiday 2025 data told a commerce-specific story: AI referral traffic to retail sites surged 693% year over year during November and December. AI referrals converted 31% higher than other traffic sources. Revenue per visit from AI referrals increased 254% year over year. Shoppers arriving from AI were 33% less likely to bounce. Travel traffic from AI was up 539%, financial services 266%, tech and software 120%.

Exposure Ninja's March 2026 analysis measured a 14.2% conversion rate for AI search traffic versus Google organic's 2.8%: a 5x multiple. The direction is consistent with the HBR article's report of a 1,300% surge in AI search referrals to US retail sites during the 2024 holiday season.

These three studies use different populations (publishers, retailers, cross-industry), different measurement periods, and different conversion definitions. They arrive at the same directional conclusion: AI-referred traffic converts significantly better than organic search traffic. The signal is robust across methodologies. The problem, as the attribution gap analysis detailed, is that most of this traffic is invisible in standard analytics.

| Source | Population | AI conversion premium | Date |
| --- | --- | --- | --- |
| Microsoft Clarity | 1,200+ publisher sites | 3x search; Copilot 17x direct | Nov 2025 |
| Adobe Holiday Report | US retail sites | +31% vs other traffic; +254% rev/visit YoY | Jan 2026 |
| Exposure Ninja | Cross-industry | 14.2% vs Google organic 2.8% (5x) | Mar 2026 |
| Previsible | SaaS and services | +527% YoY AI session growth | 2025 |

93% of sessions leave no trace

The AI search behavioral shift is large. The measurement infrastructure for it is close to nonexistent.

Semrush data from 2026 shows that 93% of Google AI Mode sessions end without an external click. Bain & Company reports 60% of all searches now complete without clicking through. For queries with AI Overviews specifically, the zero-click rate is 83%, compared to 60% for traditional results. Users spend 49 seconds in AI Mode versus 21 seconds in AI Overviews; the longer engagement suggests more considered answers, not quick glances.

SparkToro tested 2,961 prompts across ChatGPT, Claude, and Google AI Overviews and found that AI tools produce different brand recommendation lists more than 99% of the time when given the same prompt. Only 30% of brands remain visible in back-to-back responses for the same query. Superlines tracked a 35.9% brand visibility decline over five weeks of monitoring the same prompts. This is the before-and-after measurement problem quantified at the individual query level.

This non-determinism compounds the measurement problem in a specific way: a brand that appears in an AI recommendation today may not appear tomorrow, and the fluctuation is not driven by anything the brand did or did not do. 40 to 60% of AI citation sources change monthly. Model updates shift SOV by 10+ points independent of any content change. The system rewards freshness and authority, but it does so inconsistently, and the inconsistency is invisible to any measurement framework that checks visibility periodically rather than continuously.
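The continuous-versus-periodic distinction can be made concrete. A minimal sketch, using hypothetical brand names and helper functions (not Sill's actual pipeline): run the same prompt repeatedly, record which brands each response recommends, and compute each brand's appearance rate. Brands whose rate sits in the middle of the range are exactly the ones a single periodic check would misreport as simply "visible" or "invisible."

```python
def visibility_rate(runs: list[set[str]], brand: str) -> float:
    """Fraction of repeated runs of the same prompt in which the brand appears."""
    if not runs:
        return 0.0
    return sum(brand in r for r in runs) / len(runs)

def volatile_brands(runs: list[set[str]], threshold: float = 0.3) -> dict[str, float]:
    """Brands that are neither reliably present nor reliably absent:
    appearance rate strictly between threshold and 1 - threshold."""
    brands = set().union(*runs) if runs else set()
    rates = {b: visibility_rate(runs, b) for b in brands}
    return {b: r for b, r in rates.items() if threshold < r < 1 - threshold}

# Five runs of the same prompt in one day (hypothetical data)
runs = [{"Acme", "Globex"}, {"Acme"}, {"Acme", "Initech"},
        {"Globex", "Initech"}, {"Acme", "Globex"}]
print(visibility_rate(runs, "Acme"))   # appears in 4 of 5 runs
print(volatile_brands(runs))           # brands a single check would misreport
```

A daily snapshot would see one of these five runs at random; the appearance-rate view is what "continuous" measurement buys.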

| Measurement gap | Value | Source |
| --- | --- | --- |
| AI Mode sessions with zero clicks | 93% | Semrush, 2026 |
| AI-referred traffic invisible in GA4 | 70.6% | SparkToro |
| AI recommendations differing for same prompt | >99% | SparkToro (2,961 prompts) |
| Brands remaining visible in back-to-back responses | 30% | Superlines |
| Brand visibility decline over 5 weeks | 35.9% | Superlines |
| AI citation sources changing monthly | 40-60% | AirOps / Superlines |

What the citation data reveals about what drives visibility

SE Ranking's 2.3 million page study is the largest citation analysis published to date and offers more granular insight into what AI platforms actually cite than earlier studies. Sites with 1.16 million or more monthly visitors earn 6.4 citations per response versus 2.4 for sites with fewer than 2,700 visitors: a 3x difference. Sites with 32,000+ referring domains are 3.5x more likely to be cited by ChatGPT. ChatGPT values backlinks roughly 2x more than Google AI Mode. Pages with FAQ sections get 4.9 citations versus 4.4 without. Readable text (Flesch-Kincaid Grade 6 to 8) earns 4.6 citations versus 4.0 for Grade 11+.

Semrush's study of 230,000+ prompts and 100 million+ citations found that long-form articles (500 to 2,000 words) account for the largest share of AI citations, and 54 to 64% of cited posts focus on sharing knowledge or practical advice. Reddit was cited in nearly 60% of ChatGPT responses in early August before collapsing to roughly 10% by mid-September: a single platform's citation share can shift by 50 points in six weeks.

BrightEdge's research found that sites with author schema are 3x more likely to appear in AI answers, structured data and FAQ blocks increase citations by 44%, and updates within 60 days make a site 1.9x more likely to be cited. AI Overviews now trigger on nearly 48% of all tracked searches, a 58% increase year over year. Industries with the highest adoption: Healthcare (88%), Education (83%), and B2B Technology (82%).

These data points describe a system that rewards specific content attributes (freshness, structure, readability, authority) while remaining fundamentally non-deterministic in how it selects which sources to surface on any given query. The tactics that improve citation likelihood are identifiable; the ability to predict whether a specific piece of content will be cited in a specific response is not.

What Sill's monitoring data adds to the picture

Sill's monitoring pipeline queries the actual chat interfaces of ChatGPT, Gemini, Perplexity, and Google AI Overviews with web search enabled, not the developer APIs that most monitoring tools use. The distinction matters because the APIs skip web search, citations, and retrieval-augmented generation entirely. The chat interface that 300 million weekly ChatGPT users interact with is a different product from the API, and it produces different brand recommendations.

Across 182 AI visibility analyses spanning 139 brands and 86 industries, the data confirms several findings from the third-party research above while adding platform-level granularity that aggregate studies miss.

23% of brands score zero SOV across all four platforms: completely invisible to AI recommendations. The median SOV is 15 out of 100, and 34% of brands cluster in the 11-to-20 range. The distribution is bimodal: brands tend to be either invisible or moderately visible, with relatively few in between.

Platform divergence is more severe than aggregate scores suggest. 55% of brands have a 10+ point SOV spread between their best and worst platform. Gemini averages 23.6 SOV (the most generous), while Perplexity averages 15.0 with a 56% zero-SOV rate (the most selective). A brand that tracks only one platform, or averages across platforms, sees a number that does not represent its actual visibility on any single platform. This is relevant to the measurement discussion because most monitoring tools report aggregate SOV without platform decomposition.

| Platform | Avg SOV | Zero-SOV rate | Character |
| --- | --- | --- | --- |
| Gemini | 23.6 | Lowest | Most generous; recommends broadly |
| Google AI Overviews | 19.8 | Moderate | Follows organic ranking signals |
| ChatGPT | ~18 | Moderate | Widest distribution; polarized |
| Perplexity | 15.0 | 56% | Most selective; ignores most brands |

Citation overlap between platforms is minimal. 91.6% of URLs cited across the dataset appear on exactly one platform. Only 0.1% of pages are cited by all four. The practical implication: optimizing content for AI citation is not a single optimization problem; it is four distinct problems running in parallel, each with different source preferences and retrieval pipelines. Pages with 19 or more statistical data points earn 93% more citations than pages without (5.4 vs 2.8 average citations). YouTube content earns 14.0 citations per page, 3.4x the rate of comparison and review sites. These patterns are consistent across platforms even though the specific pages cited are not.
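The single-platform citation statistic is straightforward to reproduce. A hedged sketch with hypothetical URLs (the real dataset is Sill's, not shown here): collect each platform's set of cited URLs, count how often each URL appears across platforms, and report the fraction cited by exactly one.

```python
from collections import Counter

def single_platform_share(citations: dict[str, set[str]]) -> float:
    """Fraction of all distinct cited URLs that appear on exactly one platform."""
    counts = Counter(url for urls in citations.values() for url in urls)
    if not counts:
        return 0.0
    return sum(1 for c in counts.values() if c == 1) / len(counts)

# Hypothetical citation sets per platform
citations = {
    "chatgpt":    {"a.com/x", "b.com/y", "c.com/z"},
    "gemini":     {"a.com/x", "d.com/w"},
    "perplexity": {"e.com/v"},
    "aio":        {"f.com/u", "b.com/y"},
}
print(single_platform_share(citations))  # 4 of 6 URLs are single-platform
```

In the toy data, 4 of 6 distinct URLs are cited by only one platform (~67%); in Sill's dataset the same calculation yields 91.6%, which is why per-platform optimization cannot be collapsed into one problem.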

The budget consequence is already visible

Forrester declared 2026 the year “the AI hype period ends” and projects that enterprises will defer 25% of planned AI spend into 2027 due to ROI concerns. Over 40% of agentic AI projects will be canceled by end of 2027. At the same time, consumer sentiment sits at a 10-year low (University of Michigan), Q1 2026 marketing budgets are under active review, and practitioners are publicly asking how to justify their AI search investment.

The tension is specific: 94% of enterprises are increasing AI search spend (Conductor) into a market where Forrester says a quarter of AI budgets will be deferred for lack of proven return. Both statements can be true simultaneously. The investment is increasing because the behavioral shift demands it; the deferral is happening because the measurement infrastructure cannot demonstrate what the investment produced.

This is the pattern that killed budgets in previous channels. Podcast advertising experienced a similar cycle: rapid adoption followed by measurement scrutiny followed by budget pullback for the brands that could not demonstrate impact beyond downloads. The brands that survived the pullback were those that had built attribution frameworks before the CFO asked for one. The PR industry's Barcelona Principles exist because PR survived its version of this cycle; the GEO market has not yet built its equivalent.

Google's AI Mode self-citation behavior adds another layer of complexity. SE Ranking's analysis of 68,313 keywords found that Google.com accounts for 17.42% of all citations in AI Mode, tripled from 5.7% in June 2025. In travel and entertainment, Google citations exceeded 48% of total answers. Google is increasingly citing itself. Brands optimizing for AI visibility must now account for a platform that preferentially surfaces its own properties.

The gap between adoption evidence and measurement capability

The data from Q1 2026 can be summarized in two columns. The left column contains evidence of a structural behavioral shift in how consumers discover, evaluate, and select brands. The right column contains the measurement infrastructure available to quantify the shift's impact on any individual brand.

The behavioral evidence is strong: multiple independent studies, large sample sizes, converging conclusions. The measurement infrastructure is functionally the same as it was in 2024: before-and-after SOV comparisons that cannot distinguish content impact from background noise, GA4 referral segments that systematically understate volume by 70%, and branded search proxies that conflate AI impact with every other channel driving brand awareness.

The attribution gap is not closing. It is widening, because the AI search channel is growing faster than the measurement tools tracking it. Forrester's 25% deferral projection is what happens when that gap persists: investment decisions made on incomplete data, with the predictable result that the investments most difficult to attribute get cut first.

The three-layer measurement framework mapped to PR's Barcelona Principles remains the structural answer: quasi-experimental SOV measurement at the top, branded search correlation in the middle, and downstream referral attribution at the bottom. The scale data from Q1 2026 makes the framework more urgent. It does not make it easier.
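The top layer of that framework, distinguishing a real SOV change from background noise, can be sketched with a standard two-proportion z-test. This is an illustrative assumption, not Sill's published method: treat visibility as k appearances out of n repeated prompt runs per measurement window, and test whether the before/after difference exceeds what sampling noise alone would produce.

```python
import math

def sov_change_significant(k1: int, n1: int, k2: int, n2: int,
                           z_crit: float = 1.96) -> tuple[bool, float]:
    """Two-proportion z-test: did SOV (k appearances in n prompt runs)
    change significantly between two measurement windows?"""
    p1, p2 = k1 / n1, k2 / n2
    p = (k1 + k2) / (n1 + n2)                      # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    if se == 0:
        return False, 0.0
    z = (p2 - p1) / se
    return abs(z) > z_crit, z

# 18/200 appearances before a content change vs 34/200 after (hypothetical)
significant, z = sov_change_significant(18, 200, 34, 200)
print(significant, round(z, 2))
```

The practical point: with small n, the 10+ point swings that model updates cause on their own will clear the significance bar, which is why the window sizes and run counts matter as much as the test itself.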

What this means for the next 12 months

HBR published “Preparing Your Brand for Agentic AI” (Acar and Schweidel, March 2026). The article reports that two-thirds of Gen Z and over half of Millennials use LLMs for product research, and 60% of US shoppers expect to use agentic AI for purchases within 12 months (Kearney survey). Sephora's AI tool users are 3x more likely to complete purchases, with returns reduced 30%. The next phase of AI search is not human users asking questions; it is AI agents making purchasing decisions on behalf of users.

When AI agents mediate purchasing, the zero-click problem becomes total: there is no click, no referral header, no GA4 session. The agent evaluates options, selects a vendor, and executes. The brand that appears in the agent's consideration set wins. The brand that does not, loses. The measurement question shifts from “how do I attribute this visit” to “how do I know my brand was in the consideration set at all.”

That is what Share of Voice monitoring measures, and why the proof layer on top of monitoring matters: the ability to demonstrate, with statistical confidence, that a specific action improved the likelihood of being in the consideration set. The monitoring data across 139 brands and 86 industries shows that 23% of brands score zero SOV across all platforms. The behavioral data from Q1 2026 confirms that the window to build measurement infrastructure before budget decisions are made in the dark is narrowing.

The scale data is in. The measurement window is open.

45 billion AI sessions a month, a 5x conversion premium, and 93% zero-click. Sill tracks your AI Share of Voice daily across ChatGPT, Gemini, Perplexity, and Google AI Overviews. Start building the measurement baseline before Q2 budget reviews begin.

References

  1. Smith, Ethan. “AI Assistants Now Generate Sessions Equal to 56% of Global Search Volume.” Graphite.io via Search Engine Land, March 2026. searchengineland.com
  2. Eight Oh Two. “2026 AI Search Behavior Study: AI Is Now the First Stop for Search.” Eight Oh Two, November 2025. eightohtwo.com
  3. Dubois, David; Dawson, John; Jaiswal, Akansh. “Forget What You Know About SEO. Here's How to Optimize Your Brand for LLMs.” Harvard Business Review, June 2025. hbr.org
  4. Conductor. “State of AEO/GEO Report.” Conductor Academy, January 2026. conductor.com
  5. Microsoft Clarity. “AI Traffic Converts at 3x the Rate of Other Channels.” Microsoft Clarity Blog, November 2025. clarity.microsoft.com
  6. Adobe. “AI-Driven Traffic Surges Across Industries.” Adobe Business Blog, January 2026. adobe.com
  7. Exposure Ninja. “AI Search Statistics.” Exposure Ninja Blog, March 2026. exposureninja.com
  8. SE Ranking. “AI Statistics: 2.3M Page Citation Analysis.” SE Ranking Blog, 2026. seranking.com
  9. Semrush. “Most-Cited Domains in AI: 230,000+ Prompts Analyzed.” Semrush Blog, 2026. semrush.com
  10. BrightEdge. “AI Hyper Cube: Brand AI Search Visibility Research.” BrightEdge, March 2026. brightedge.com
  11. SparkToro. “AI Recommendations Change With Nearly Every Query.” SparkToro Blog, 2025-2026. sparktoro.com
  12. Superlines. “AI Search Statistics 2026.” Superlines.io, 2026. superlines.io
  13. Seer Interactive. “AI Overview Impact on Google CTR.” Seer Interactive, September 2025. seerinteractive.com
  14. SE Ranking. “Google Is Citing Google More in AI Mode.” SE Ranking Blog, February 2026. seranking.com
  15. Acar, Oguz A.; Schweidel, David A. “Preparing Your Brand for Agentic AI.” Harvard Business Review, March 2026. hbr.org
  16. Previsible. “2025 State of AI Discovery Report.” Previsible.io, 2025. previsible.io

Get Your Report

Request your first analysis today to see where you stand.

Daniel Wang

Founder · UC Berkeley MIDS

Previously at Nordstrom, Bloomberg, Hexagon (now Octave)
