A marketing team publishes a statistics-rich case study: a named client, specific outcome numbers, three months of data. They have read that pages with 19 or more data points earn 93% more AI citations; they did the work. The case study goes live in WordPress at 9 AM. The question they cannot answer is when this content reaches ChatGPT, Gemini, and Perplexity, and whether it changes how often those platforms recommend their brand. That question is now answerable. Sill connects to 10 CMS platforms and crawls any site not backed by a CMS, feeding every content change directly into the AI visibility pipeline that tracks how updates propagate to actual platform recommendations.
TL;DR
Sill connects to 10 CMS platforms — WordPress, Shopify, Ghost, Contentful, Sanity, Strapi, Webflow, HubSpot, Wix, and Squarespace — plus a crawl-based fallback for any site not backed by a CMS. Content changes are timestamped against the brand's SOV timeline, making the lag between publishing and AI visibility impact measurable for the first time. Pages updated within 90 days earn 67% more AI citations (SE Ranking); detection closes the loop between on-site changes and Share of Voice outcomes.

Pages updated within 90 days earn 67% more AI citations (6.0 vs 3.6 per SE Ranking); content freshness is the 5th-ranked GEO tactic by evidence strength.
AI platforms do not operate on static snapshots of the web. ChatGPT's retrieval-augmented generation layer, Perplexity's real-time web indexing, and Google's AI Overviews all draw on source content that is continuously re-indexed. SE Ranking's analysis of content freshness across 2.3 million pages found that pages updated within 90 days earn 6.0 AI citations on average, versus 3.6 for pages outside that window, a 67% gap. The implication is direct: when a brand publishes a new page or updates an existing one, the change is a signal, not a static event.
The challenge has been connecting the publishing event to the visibility outcome. Before content-change detection, AI visibility platforms saw the downstream effect in Share of Voice trends but had no structured record of what changed or when. The change and the outcome lived in different systems. A case study published in a headless CMS on Tuesday was invisible to the monitoring pipeline until the next scheduled SOV run captured it as ambient data, with no record of the publish event that triggered the change.
Content detection closes that gap. When Sill receives a webhook from Contentful at the moment of publish, logs a polling delta from a Shopify storefront, or detects a crawl difference on a custom site, that event is timestamped and recorded against the brand's visibility timeline. The content change and the SOV change are now in the same data model. For the 87% of GEO recommendations that are on-site fixes (748 recommendations across 62 brands in Sill's pipeline), this is the mechanism that closes the measurement loop.
Sill uses three detection tiers: webhooks (real-time, sub-minute), polling (near-real-time, hourly), and crawl scheduling (daily or weekly for sites without APIs).
Not every CMS has the same integration surface. A Webflow site can trigger a webhook on every publish event; a static HTML site has no event system at all. Sill's detection architecture is tiered to match the integration surface of each platform, trading latency for compatibility as the surface shrinks.
| Tier | Mechanism | Latency | Best For |
|---|---|---|---|
| Webhook | CMS pushes an event to Sill on every publish or update | Sub-minute | WordPress, Webflow, Ghost, Contentful, Strapi, HubSpot |
| Poll | Sill queries the CMS API on a scheduled interval and diffs the result | Hourly | Shopify, Wix, Squarespace, Sanity |
| Crawl | Sill fetches and diffs rendered HTML on a configured schedule | Daily or weekly | Any site: no CMS required, custom builds, static generators |
The crawl tier is the fallback for every site that cannot use a webhook or polling API, but it is also the default for a meaningful share of brand websites. Many B2B brands run custom-built sites, static builds from frameworks and generators like Next.js or Hugo, or legacy platforms with no content API. For these, the crawl tier provides the same visibility-event linkage at a longer latency. Crawl frequency is configurable per domain: daily for high-velocity publishing teams, weekly for brands with slower content cycles.
Sill supports 10 CMS platforms natively: WordPress, Shopify, Ghost, Contentful, Sanity, Strapi, Webflow, HubSpot, Wix, and Squarespace.
The 10 platforms cover the substantial majority of brand websites. WordPress alone powers 43% of the web; Shopify holds 28% of the U.S. e-commerce market. Together, the platforms below account for the publishing infrastructure of most B2B and B2C brands that would run an AI visibility monitoring program.
| Platform | Type | Detection Tier | Setup |
|---|---|---|---|
| WordPress | Traditional CMS | Webhook | Install plugin, paste webhook URL |
| Shopify | Commerce platform | Poll | API key from Shopify admin |
| Ghost | Publishing CMS | Webhook | Add webhook in Ghost admin settings |
| Contentful | Headless CMS | Webhook | Webhook + space/environment credentials |
| Sanity | Headless CMS | Poll | Project ID + read token |
| Strapi | Headless CMS | Webhook | Strapi API token + webhook endpoint |
| Webflow | Visual CMS | Webhook | Site ID + API token from Webflow dashboard |
| HubSpot | Marketing platform | Webhook | Private app token from HubSpot settings |
| Wix | Website builder | Poll | API key from Wix developer settings |
| Squarespace | Website builder | Poll | API key from Squarespace developer panel |
The connection wizard auto-detects the CMS from a URL in most cases, reducing setup to credential entry rather than platform identification. For webhook-based platforms, Sill generates a signed endpoint URL; credentials are encrypted at rest and never logged in plaintext. The test-connection endpoint validates credentials before saving, so misconfigured integrations surface at setup time rather than silently failing overnight.
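Verifying a signed webhook typically looks like the sketch below. HMAC-SHA256 over the raw request body is assumed here because it is the common convention (Contentful, Shopify, and GitHub all use variants of it); Sill's actual signing scheme is not documented in this article:

```python
import hashlib
import hmac

def verify_webhook(payload: bytes, received_sig: str, secret: str) -> bool:
    """Check a webhook payload against its signature header.
    HMAC-SHA256 is an assumed convention, not Sill's documented scheme."""
    expected = hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking timing information during comparison
    return hmac.compare_digest(expected, received_sig)

secret = "whsec_example"  # hypothetical shared secret
body = b'{"sys":{"id":"case-study-q3"},"event":"publish"}'
sig = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()

print(verify_webhook(body, sig, secret))              # → True
print(verify_webhook(body, "bad-signature", secret))  # → False
```

Rejecting unsigned or mis-signed payloads at the endpoint is what makes a publicly reachable webhook URL safe to generate per brand.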
For sites with no CMS, Sill crawls rendered HTML on a configurable schedule and diffs the content delta. No API or plugin required, works on any public URL.
A significant share of B2B brands run websites that do not map to a recognized CMS: custom-built Next.js sites, legacy PHP applications, static site generators, or documentation systems like Docusaurus and GitBook. These sites publish content with the same frequency as CMS-backed sites; they simply have no event system to tap. The crawl tier handles them without modification.
Setup requires a domain URL and a crawl schedule. Sill fetches the rendered HTML at each interval, diffs it against the prior snapshot, and records any detected delta as a content-change event against the brand's timeline. URL pattern mapping allows teams to scope detection to specific page types (blog paths, product pages, documentation sections) so that changes to navigation or footer elements do not generate noise in the visibility pipeline.
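The snapshot-and-diff loop with URL pattern scoping can be sketched as follows. Content hashing stands in for Sill's actual diffing, and a real implementation would strip navigation and footer chrome before comparing; function names and the glob-style pattern syntax are illustrative assumptions:

```python
import fnmatch
import hashlib

def in_scope(url: str, patterns: list[str]) -> bool:
    """Scope detection to configured page types (blog paths, docs, etc.)."""
    return any(fnmatch.fnmatch(url, p) for p in patterns)

def detect_change(url: str, html: str, snapshots: dict[str, str],
                  patterns: list[str]) -> bool:
    """Diff a freshly crawled page against its prior snapshot.
    A page with no prior snapshot counts as changed (first detection)."""
    if not in_scope(url, patterns):
        return False
    digest = hashlib.sha256(html.encode()).hexdigest()
    changed = snapshots.get(url) != digest
    snapshots[url] = digest
    return changed

snaps: dict[str, str] = {}
patterns = ["https://example.com/blog/*", "https://example.com/docs/*"]
url = "https://example.com/blog/launch"
print(detect_change(url, "<h1>v1</h1>", snaps, patterns))  # → True  (first crawl)
print(detect_change(url, "<h1>v1</h1>", snaps, patterns))  # → False (unchanged)
print(detect_change("https://example.com/about", "<h1>x</h1>",
                    snaps, patterns))                      # → False (out of scope)
```

The out-of-scope branch is what keeps navigation or footer churn on unmonitored paths from generating noise events in the visibility pipeline.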
The crawl tier also serves as a fallback for CMS-backed sites during webhook or API outages. If a platform's API becomes temporarily unavailable, the crawl schedule continues to run, ensuring that content-change detection does not silently break during maintenance windows or rate-limit events. Connections can be paused and resumed without losing configuration, which matters for teams with seasonal publishing cycles or planned freezes.
A detected content change triggers Sill's pipeline to correlate the publish timestamp against SOV trend data, making the content-to-visibility lag measurable for the first time.
Content detection is the intake layer; the visibility pipeline is what makes that intake actionable. When Sill logs a content-change event, it is timestamped in the brand's history alongside daily SOV runs. For brands tracking Share of Voice across ChatGPT, Gemini, Perplexity, and Google AI Overviews, this creates a timeline where content changes and visibility shifts appear in the same view. The lag between publishing a page and observing its effect in AI platform responses, previously invisible, becomes measurable.
The GEO research base offers context for what to expect. The foundational GEO paper (Aggarwal et al., KDD 2024) tested nine optimization methods across 10,000 queries and found that statistics addition improved AI visibility by 30-40%. Wu et al. (CMU 2025) found AutoGEO-optimized content achieved a 35.99% visibility improvement. Both findings involve on-site content changes; both require a before-and-after measurement structure to evaluate. Content detection provides the “before” timestamp that makes the after-state interpretable.
Citation source volatility makes this timeline critical. Sill's monitoring data shows that 40-60% of the specific URLs cited by AI platforms change month-over-month, independent of any content changes by the brand. Without a content-change record, a visibility shift following a platform update looks identical to a visibility shift following a published case study. The detection layer is the control input that separates deliberate intervention from background noise. As we documented in the GEO measurement gap analysis, before-and-after SOV comparisons without control inputs are the structural weakness of current GEO measurement; content detection is one of the inputs required to build that control.
For teams implementing the on-site GEO tactics we ranked in 12 GEO Tactics Ranked by Scientific Evidence, content detection is the infrastructure that makes those tactics measurable at the brand level rather than only across aggregate studies. The 23% of brands currently scoring zero Share of Voice across all AI platforms need to know whether their on-site changes are working; detection gives them the starting timestamp for that measurement.
Sill connects to 10 CMS platforms and crawls any site without one. Every content change is recorded against your SOV timeline so you can see what your publishing decisions actually do to your AI visibility.
Request your first analysis today to see where you stand.