Mozilla shipped Perplexity as a built-in search option in Firefox in early 2026, making it available to roughly 180 million monthly active users on desktop alone; at around 6.5% global browser share, Firefox's full desktop reach is an estimated 350 million users worldwide. When a query goes through Perplexity in Firefox's address bar, the user receives one synthesized answer with three to five cited sources; if a brand is not among those sources, it does not exist in that interaction. This is not an incremental shift in search engine market share. Browsers are becoming AI answer engines rather than search engine launchers, and the infrastructure through which consumers discover brands is being rewired at the browser level by Mozilla, OpenAI, Perplexity, and Samsung simultaneously.
TL;DR
- Mozilla shipped Perplexity as a built-in search option in Firefox for 180 million monthly active desktop users. A query through Perplexity in Firefox's address bar returns one synthesized answer with 3-5 cited sources.
- 88% of brands ranking in Google's top 10 are never mentioned in corresponding AI answers (Searchless.ai).
- The shift is not isolated: Perplexity launched its own browser Comet (iOS #3 at launch), OpenAI released ChatGPT Atlas on macOS, Chrome added Gemini Auto Browse, and Samsung pre-installed Perplexity on the Galaxy S26 with OS-level access.
- Brand citation rates vary up to 9x across platforms (Microsoft Copilot vs. Google AI Mode), and 93% of Google AI Mode sessions end without a click.
- GA4 cannot separate AI Mode traffic, and over 80% of AI referrals arrive with stripped referrer headers.
- Content with proprietary statistics earns 3.2x more citations (GenOptima Q1 2026).
- The browser-native AI shift expands the affected audience from early adopters to the general population whose browser made the choice for them.

Firefox now offers Perplexity AI as a built-in search option for 180 million monthly desktop users, delivering synthesized answers with 3-5 cited sources per query.
Mozilla added Perplexity as a global search engine option in Firefox's address bar following regional trials in the U.S., U.K., and Germany that generated what Mozilla called "positive user feedback." The integration is not a plugin or extension; Perplexity appears alongside Google, Bing, and DuckDuckGo in the unified search button. Users can select it for individual queries or set it as their default. The feature shipped on desktop worldwide, with mobile versions planned for subsequent months.
The design of the integration matters as much as the fact of it. When a user selects Perplexity from the address bar, they receive a single conversational answer with inline citations rather than a list of ten blue links. The user sees one synthesized response. The response cites three to five sources. If a brand does not appear in those sources or in the answer text, the brand is functionally invisible for that query. This is a structural change in what a "search result" even means at the browser level, and it follows the same pattern that Firefox Version 148 reinforced with its AI Controls dashboard, which lets users manage all generative AI features from one settings panel.
Four competing AI browsers launched within six months: Perplexity Comet, OpenAI Atlas, Chrome's Gemini panel, and Samsung's Perplexity pre-install on Galaxy S26.
Firefox's integration is one node in a broader infrastructure shift. Perplexity launched its own standalone browser, Comet, on iOS in March 2026 after earlier releases on Windows, macOS, and Android; the iOS app reached #3 Overall on the U.S. App Store within 48 hours. Comet's context-aware assistant reads the active tab, summarizes page content, compares prices on e-commerce pages, and answers questions about what the user is currently viewing. OpenAI released ChatGPT Atlas, a Chromium-based browser with ChatGPT built into a sidebar assistant, on macOS in October 2025 and added an "Auto" mode in January 2026 that switches between ChatGPT and Google depending on the query. Google Chrome shipped an Auto Browse feature for Gemini Premium subscribers in January 2026 that completes tasks autonomously through the Gemini side panel.
Samsung delivered arguably the most consequential distribution play: Perplexity is pre-installed on the Galaxy S26 with OS-level access, making Perplexity the first non-Google company to receive that integration on a Samsung device. "Hey Plex" launches instantly, Bixby routes web searches through Perplexity, and Samsung Internet supports Perplexity as a default search option. The combined distribution of these moves covers desktop (Firefox, Atlas, Chrome), mobile (Comet, Samsung), and the browser settings layer that consumers rarely revisit once configured.
88% of brands ranking in Google's top 10 organic results are never mentioned in corresponding AI-generated answers, per Searchless.ai analysis.
Searchless.ai's analysis of brand visibility across traditional and AI search surfaces found that 88% of brands ranking in Google's top 10 organic results are never mentioned in corresponding AI-generated answers. The finding is consistent with what Sill's data has shown at smaller scale: only 14% of URLs cited by Google AI Mode rank in the traditional top 10, and 55% of brands have a 10+ point share-of-voice (SOV) spread between platforms. Traditional SEO rankings and AI visibility operate on different signal sets, and the browser-level integration of AI answers means that gap now affects a larger share of consumer queries.
The citation dynamics on Perplexity specifically compound this problem. GenOptima's Q1 2026 cross-platform benchmark found that Perplexity achieves an 11.4% brand mention rate with an average citation position of 1.3 when it does mention a brand, the highest citation quality of any platform measured; Google Gemini leads in citation volume with a 21.4% mention rate. Sill's URL anchor study found Perplexity returns to 62.4% of yesterday's cited URLs, twice the rate of any other platform, yet has the lowest brand primary rate in our data at 6.7%. Perplexity is consistent about which pages it cites but selective about which brands it names.
Brand citation rates vary up to 9x across AI platforms, with Microsoft Copilot citing brands at roughly nine times the rate of Google AI Mode.
Each browser-AI combination creates a distinct discovery surface with different citation behavior, answer formats, and brand visibility dynamics. The table below maps the current landscape as of April 2026.
| Browser / Integration | AI Engine | Distribution | Answer Format | Brand Visibility Behavior |
|---|---|---|---|---|
| Firefox + Perplexity | Perplexity | ~350M desktop users | Single answer, 3-5 citations | 11.4% mention rate; position 1.3 when mentioned; 62.4% URL persistence |
| Perplexity Comet | Perplexity | Standalone (iOS #3 at launch) | Tab-aware assistant + answer | Same engine; adds page-context queries |
| Samsung Galaxy S26 | Perplexity (OS-level) | Pre-installed, first non-Google OEM deal | "Hey Plex" voice + browser default | Mobile-first discovery; voice queries skew informational |
| OpenAI Atlas | ChatGPT | macOS (Windows/iOS planned) | Sidebar assistant; auto Google/AI switching | ChatGPT changes top brand 50.1% daily; 11.7% primary rate |
| Chrome + Gemini | Gemini 3 | ~3.5B users (Premium only for Auto Browse) | Side panel + autonomous task completion | 21.4% mention rate (highest volume); 93% zero-click in AI Mode |
| Edge + Copilot | Microsoft Copilot | ~680M users (built-in) | Sidebar chat + page context | Cites brands at ~9x the rate of Google AI Mode |
The variance in brand citation rates across these surfaces is the core problem. Microsoft Copilot cites brands at roughly nine times the rate of Google AI Mode, meaning a brand that appears reliably in one browser-AI combination may be entirely absent in another. This is not a distribution nuance; it is a structural fragmentation of brand discovery infrastructure, and single-point measurement captures less than half the competitive picture.
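The cross-platform spread is straightforward to quantify once per-platform mention rates are tracked. A minimal sketch, assuming you already have mention rates per surface; the platform names are real, but the per-brand rates below are invented illustration values, not Sill data:

```python
# Sketch: quantify how one brand's visibility diverges across AI surfaces.
# Mention rates are hypothetical illustration values.
mention_rate = {
    "perplexity": 0.114,      # share of tracked prompts mentioning the brand
    "gemini": 0.214,
    "copilot": 0.180,
    "google_ai_mode": 0.020,
}

best = max(mention_rate, key=mention_rate.get)
worst = min(mention_rate, key=mention_rate.get)
variance_ratio = mention_rate[best] / mention_rate[worst]

print(f"highest: {best} ({mention_rate[best]:.1%})")
print(f"lowest:  {worst} ({mention_rate[worst]:.1%})")
print(f"spread:  {variance_ratio:.1f}x between surfaces")
```

With real tracking data, the same max/min ratio is what produces figures like the 9x Copilot-vs-AI-Mode gap described above.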
93% of Google AI Mode sessions end without a click to any website; AI Overviews reduce clicks to the #1 organic result by 58%.
Google AI Mode has reached 75 million daily active users, and 93% of those sessions end without a single click to any website. That rate is more than double the zero-click rate for AI Overviews, where 43% of queries result in no click. AI Overviews themselves reduce clicks to the top-ranking organic page by 58%. When these dynamics combine with browser-native AI answers from Perplexity and Atlas, the traditional funnel in which a consumer clicks a search result, visits a page, and evaluates a brand has been compressed or eliminated for a growing share of queries.
SparkToro's January 2026 research, which tested 2,961 prompts with 600 volunteers, found that fewer than 1 in 100 AI runs produce the same brand list and fewer than 1 in 1,000 produce the same order. The combination of zero-click behavior and non-deterministic brand recommendations means that visibility in AI answers is both more valuable (because the answer is the terminal interaction) and harder to measure (because the answer changes with each run). Perplexity referral traffic does show 2.4x higher engagement time, 1.5x higher pages per session, and 1.6x higher conversion rates compared to Google organic traffic, but that only accounts for the minority of interactions where a user clicks a citation link at all.
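SparkToro's repeatability finding can be reproduced in miniature: run the same prompt repeatedly, then compare the returned brand lists both as sets (same brands, any order) and as tuples (same exact order). A sketch with invented brand lists standing in for real AI runs:

```python
from collections import Counter

# Hypothetical brand lists returned by four runs of the same prompt.
runs = [
    ["Asana", "Trello", "Monday"],
    ["Trello", "Asana", "Notion"],
    ["Asana", "Trello", "Monday"],
    ["Monday", "Asana", "Trello"],
]

# frozenset ignores order (same brands); tuple preserves it (same order).
set_counts = Counter(frozenset(r) for r in runs)
order_counts = Counter(tuple(r) for r in runs)

same_set_rate = max(set_counts.values()) / len(runs)
same_order_rate = max(order_counts.values()) / len(runs)
print(f"most common brand set:      {same_set_rate:.0%} of runs")
print(f"most common exact ordering: {same_order_rate:.0%} of runs")
```

At SparkToro's scale (2,961 prompts, repeated runs), the same two metrics fall below 1% and 0.1% respectively.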
Content with proprietary statistics earns 3.2x more AI citations. 85% of brand mentions in AI responses originate from third-party pages, not from the brand's own domain.
GenOptima's Q1 2026 benchmark found that content with proprietary statistics earns 3.2x more citations than content without unique data. This aligns with the structural patterns Sill has documented across multiple studies: pages with 19+ statistics earn 93% more AI citations (SE Ranking), answer capsules appear in 87% of ChatGPT-cited posts (Search Engine Land), and the anatomy of an AI-cited page consistently favors data density, structured formatting, and self-contained factual statements.
AirOps found that 85% of brand mentions in AI responses come from third-party pages rather than the brand's own domain, with brands 6.5x more likely to be cited through external sources. When the total citation supply per query is three to five sources rather than ten blue links, the competitive dynamics compress further: a single authoritative third-party page that mentions your competitor can displace your own content entirely. Sill's data confirms the fragmentation: 91.6% of cited URLs appear on only one AI platform, so the pages that earn citations on Perplexity are largely different from those cited by ChatGPT or Gemini.
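The single-platform fragmentation figure is itself a simple overlap computation: for every cited URL, count how many platforms cite it. A sketch with invented URL sets standing in for real citation logs:

```python
# Hypothetical cited-URL sets per platform (illustrative only).
cited = {
    "perplexity": {"a.com/study", "b.org/data", "c.io/guide"},
    "chatgpt": {"b.org/data", "d.net/review"},
    "gemini": {"e.com/report"},
}

all_urls = set().union(*cited.values())
platform_count = {
    url: sum(url in urls for urls in cited.values()) for url in all_urls
}
single_platform = sum(1 for n in platform_count.values() if n == 1)
print(f"{single_platform / len(all_urls):.0%} of cited URLs "
      "appear on only one platform")
```

Run against Sill's real citation logs, this count is what yields the 91.6% single-platform figure.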
GA4 does not show Google AI Mode as its own traffic source. Search Console does not split out AI Mode clicks. Over 80% of AI traffic arrives with stripped referrers.
GA4 does not surface Google AI Mode as a separate traffic source; AI Mode visits appear as Organic or Direct, indistinguishable from traditional search. Search Console does not split out AI Mode clicks or impressions. When a user discovers a brand through Perplexity in Firefox and later visits the brand's site, that visit arrives with a stripped referrer or shows up as direct traffic. Over 80% of AI traffic arrives with stripped referrer headers, meaning standard analytics cannot attribute the visit to the AI interaction that generated it.
This is the measurement gap that browser-native AI makes urgent. When AI answers lived inside dedicated apps like ChatGPT or Perplexity's website, the affected audience was self-selecting: users who had specifically chosen to use an AI search tool. Browser integration removes that friction entirely. Users who never downloaded an AI app, never created a Perplexity account, and have no awareness that their search behavior has changed are now receiving AI-synthesized answers where they used to get Google results. The audience exposed to AI-mediated brand discovery has expanded by an order of magnitude, and the standard analytics stack cannot measure any of it.
The EU AI Act's Article 50 transparency obligations take effect August 2, 2026, requiring AI systems to disclose when users interact with AI-generated content.
The EU AI Act's transparency provisions under Article 50 become enforceable on August 2, 2026. The second draft of the Code of Practice on Transparency of AI-Generated Content, published March 3, 2026, addresses what the AI Office calls the gap "between legal obligation and technical reality." The requirements mandate that users be informed clearly when they interact with AI and when content is artificially generated or manipulated. A third and final version of the Code is expected by June 2026.
The regulatory framework does not currently mandate specific citation standards for AI search tools, but the transparency obligations could create pressure toward more explicit source attribution. If browser-native AI answers must label themselves as AI-generated and disclose their sourcing, the value of being a cited source increases: citation becomes a visible trust signal rather than a footnote that most users never expand. Brands with content structured for AI citability (proprietary data at 3.2x citation rate, answer capsules at 87% presence in cited posts, content freshness within 90 days for 67% more citations) would benefit disproportionately from any regulatory push toward transparency. Firefox's own AI Controls dashboard, which gives users granular control over AI features, suggests Mozilla is already anticipating this direction.
Brands need daily multi-platform AI monitoring because citation behavior differs 9x across platforms and browser-AI pairings fragment the discovery surface.
The browser-native AI shift creates three operational requirements that did not exist when AI search was confined to dedicated apps.
| Requirement | Why It Matters Now | Evidence |
|---|---|---|
| Multi-platform daily monitoring | Browser integrations multiply the surfaces where brand visibility diverges | 9x citation rate variance across platforms; 55% of brands have 10+ point SOV spread |
| Entity-level brand signal investment | AI answers decide brands from parametric memory, then find citations post-hoc | YouTube mentions at 0.737 correlation; brand web mentions at 0.664 (Ahrefs, 75K brands) |
| Proprietary data creation | When 3-5 citation slots is the total supply, unique data is the strongest differentiator | 3.2x citation rate for proprietary statistics (GenOptima, Q1 2026) |
The first step is knowing whether your brand appears at all. Sill's data across 139 brands and 86 industries shows that 23% of brands score zero SOV across all AI platforms. The browser-native AI shift does not change the underlying citation dynamics, but it expands the audience affected by them from early adopters who chose to use AI search to the general population whose browser made the choice for them. The infrastructure through which consumers discover brands has been rewired, and the brands that monitor, measure, and optimize for AI answer engines across platforms will be the ones that remain visible as browsers complete the transition from search engine launchers to answer engines.
Sill monitors your brand across six AI platforms daily, separating citations from mentions and tracking how browser-native AI changes affect your visibility over time.
Request your first analysis today to see where you stand.