Executive Summary

We ran 409 automated checks against the homepages of 20 leading SaaS companies. Average AI Visibility: 29/100. Average Sigly Score: 54/100. Zero achieved “mature” AI readiness. The biggest gap isn’t access — most sites don’t block AI crawlers. It’s comprehension. When AI reads these pages, it finds navigation labels weighted as heavily as product value propositions, 90%+ framework noise, and no machine-readable freshness signals.

The Question Every SaaS Company Should Be Asking

When ChatGPT, Perplexity, or Claude recommends a tool to a potential customer, will they mention yours?

We scanned the homepages of 20 of the world’s most recognized SaaS companies — from Salesforce to Figma, Stripe to Notion — through Sigly’s AI Visibility analysis engine. The results reveal a striking disconnect: companies spending millions on brand, content, and SEO are largely invisible to the AI systems increasingly shaping how buyers discover and evaluate software.

The headline finding: not a single site in our study achieved “mature” AI readiness. The average Sigly Score across all 20 was just 54 out of 100, and the average AI Visibility score — measuring how effectively AI systems can access and comprehend site content — was only 29.

These are household names in tech. If they’re not ready, who is?

Key Data Points

  • Average Sigly Score: 54 / 100
  • Average AI Visibility: 29 / 100
  • Sites achieving “Mature” readiness: 0 out of 20
  • Sites scanned: 20 leading SaaS companies
  • Checks per site: 409 automated checks

Full Rankings: The Sigly Score Leaderboard

The Sigly Score is a composite measure of a site’s overall readiness for AI-driven discovery, combining AI Visibility (can AI access and understand your content?) with Agentic Actionability (can AI agents act on what they find?).

| Rank | Domain | Sigly Score | AI Visibility | Actionability | Maturity | Top Weakness |
|------|--------|-------------|---------------|---------------|----------|--------------|
| 1 | webflow.com | 62 | 16 | 70 | Solid | DOM depth >25 levels + render-blocking scripts |
| 2 | slack.com | 61 | 46 | 61 | Solid | Stale critical content (no freshness signals) |
| 3 | calendly.com | 61 | 21 | 64 | Solid | 13% of pages have noindex directives |
| 4 | figma.com | 60 | 46 | 55 | Solid | HTTP/HTTPS mixed content issues |
| 5 | notion.so | 57 | 29 | 61 | Developing | Canonical inconsistencies + noindex directives |
| 6 | semrush.com | 56 | 46 | 56 | Developing | Imprecise language in content |
| 7 | mailchimp.com | 55 | 31 | 48 | Developing | Stale critical content (no freshness signals) |
| 8 | stripe.com | 55 | 27 | 58 | Developing | Stale critical content (no freshness signals) |
| 9 | monday.com | 54 | 16 | 57 | Developing | DOM depth >25 levels + stale content |
| 10 | asana.com | 54 | 31 | 49 | Developing | Stale critical content (no freshness signals) |
| 11 | zapier.com | 54 | 31 | 57 | Developing | Missing canonical tags |
| 12 | typeform.com | 54 | 31 | 52 | Developing | HTTP/HTTPS mixed content issues |
| 13 | zendesk.com | 52 | 16 | 56 | Developing | DOM depth >25 levels + stale content |
| 14 | shopify.com | 52 | 31 | 54 | Developing | HTTP/HTTPS mixed content issues |
| 15 | hubspot.com | 51 | 31 | 41 | Developing | Render-blocking scripts |
| 16 | salesforce.com | 51 | 31 | 41 | Developing | HTTP/HTTPS mixed content issues |
| 17 | intercom.com | 51 | 29 | 50 | Developing | Stale content + 9% noindex pages |
| 18 | canva.com | 51 | 34 | 53 | Developing | Homepage not accessible to crawlers |
| 19 | ahrefs.com | 50 | 16 | 41 | Developing | DOM depth >25 levels |
| 20 | miro.com | 41 | 16 | 36 | Early | DOM depth >25 levels + stale content |

All 20 sites share the same universal penalty: extreme code bloat (under 10% token efficiency). The “Top Weakness” column shows each site’s most critical issue beyond that shared baseline.

Six Key Findings

1. Semantic Noise: When Everything Is a Heading, Nothing Is

The most revealing issue we uncovered isn’t that content is missing — it’s that it’s semantically flat. These sites have content, but AI systems can’t tell what matters.

Figma’s homepage returns 41 heading tags. But look at what shares the same h2 weight: the core value proposition “Prompt, code, and design from first idea to final product” sits at the same semantic level as the footer link “Company” and the navigation label “Resources.”

This pattern is everywhere. We analyzed the heading structure across a sample of sites in our study:

| Site | h1 | h2 | h3 | h4+ | Total | Notable Issues |
|------|----|----|----|-----|-------|----------------|
| figma.com | 1 | 16 | 24 | 0 | 41 | Nav + footer labels as h2 |
| slack.com | 1 | 9 | 17 | 0 | 27 | Duplicate h3s (mobile + desktop) |
| monday.com | 1 | 35 | 5 | 0 | 41 | Same headings rendered twice |
| stripe.com | 2 | 5 | 17 | 24 | 48 | 24 h4 tags in a single page |
| hubspot.com | 1 | 41 | 65 | 0 | 107 | Mega-menu sections as h2/h3 |
| webflow.com | 2 | 36 | 18 | 0 | 56 | Nav dropdowns as h2 |
| miro.com | 4 | 25 | 36 | 4 | 69 | Same h1 repeated 4 times |
| sigly.app (reference) | 1 | 14 | 4 | 0 | 19 | Built following AI-first principles |
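These counts can be reproduced with a single curl pass per site. A minimal sketch in POSIX shell — the tag-matching regex is a simplification, so treat the output as a ballpark rather than a full HTML parse:

```shell
# Count h1–h4 opening tags in the HTML piped on stdin.
# For a live page:
#   curl -sL -A "Mozilla/5.0" https://www.figma.com | count_headings
count_headings() {
  local html tag
  html=$(cat)   # read the page once so we can scan it four times
  for tag in h1 h2 h3 h4; do
    # match "<h2 " or "<h2>" so we don't also count tags like <header>
    printf '%s: %s\n' "$tag" \
      "$(printf '%s' "$html" | grep -oi "<${tag}[ >]" | wc -l | tr -d ' ')"
  done
}
```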

Why this matters for AI: When an LLM crawls raw HTML to summarize what Figma does, it receives a noisy map. Heading hierarchy is how AI determines what’s important — it’s the document’s table of contents. By treating navigation labels with the same weight as product benefits, these sites create semantic noise that dilutes their brand authority in AI-generated answers.

HubSpot’s homepage contains 107 heading tags — mostly menu structure and navigation labels. Compare that with a page built following the principles we recommend: sigly.app’s homepage has 19 headings, nearly all describing what the product does, how it works, and what it costs. From an AI’s perspective, one reads like a clear document; the other reads like a spreadsheet with 107 equally weighted rows.

Miro takes this further: its main heading — “Get from brainstorm to breakthrough with Miro” — appears four times as an h1, presumably for different viewport animations. To an AI system, four identical h1 tags don’t signal emphasis. They signal noise.

This is what we call Semantic Density — the ratio of meaningful, content-descriptive headings to total heading tags. A semantically dense page helps AI systems understand your product accurately. A semantically flat one forces AI to guess what matters.
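To make the distinction concrete, here is a hypothetical before/after fragment — the copy is invented, not taken from any site in the study:

```html
<!-- Semantically flat: a nav label carries the same weight as the value prop -->
<h2>Resources</h2>
<h2>Ship products faster</h2>

<!-- Semantically dense: navigation stays out of the heading hierarchy -->
<nav aria-label="Primary"><a href="/resources">Resources</a></nav>
<h2>Ship products faster</h2>
```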

2. The Token Tax: 90%+ of What AI Processes Is Noise

Beyond heading structure, 100% of the 20 sites received a critical penalty for extreme code bloat — less than 10% token efficiency.

When an AI model processes a webpage, it tokenizes the entire HTML. If 90%+ of those tokens are framework boilerplate, JavaScript bundles, CSS class names, and deeply nested div tags, the AI is spending its limited context window parsing noise instead of understanding your value proposition.

These SaaS homepages are, on average, over 90% markup and under 10% meaningful content from an AI’s perspective. Your value proposition, your differentiators, your pricing — all of it is paying a Token Tax to the framework that built the page, competing for attention inside a sea of artifacts that AI has to process before it gets to what matters.
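As a back-of-the-envelope illustration, you can approximate this ratio by comparing visible-text bytes to total HTML bytes. The regex-based tag stripper below is a rough sketch, not a real parser or tokenizer:

```shell
# Print an approximate content-to-markup percentage for HTML on stdin.
# Example: curl -sL https://example.com | token_efficiency
token_efficiency() {
  local html total_len text_len
  html=$(cat)
  total_len=$(printf '%s' "$html" | wc -c | tr -d ' ')
  # strip tags, then whitespace, and measure what's left
  text_len=$(printf '%s' "$html" | sed 's/<[^>]*>//g' | tr -d '[:space:]' | wc -c | tr -d ' ')
  echo $(( text_len * 100 / total_len ))
}
```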

This compounds with the semantic noise problem: not only is the important content buried in noise, but the heading structure that should help AI find it is itself noisy.

3. Neural Readability Is the Weakest Link

Our analysis separates AI Visibility into two layers: the Access Layer (can AI crawlers reach the content?) and the Neural Layer (can AI models actually comprehend it?).

The access layer averaged a decent 34.8 out of 40 — most sites aren’t actively blocking AI crawlers. But the neural layer averaged just 18 out of 60. That’s a 30% score on the component that matters most for AI comprehension.

The implication: these sites have the door open, but the content behind that door is buried under layers of framework complexity and semantic noise. Even when AI systems can access the page, they struggle to efficiently extract and retain the information that matters.

Five sites — webflow.com, monday.com, zendesk.com, ahrefs.com, and miro.com — scored just 16 on AI Visibility overall, dragged down by content buried more than 25 DOM levels deep on top of the universal Token Tax penalty.

Why Miro Scored Lowest

Miro’s 41 sits nine points below the next-lowest score (Ahrefs at 50). Miro was hit by two compounding penalties: extreme code bloat (-20 points) and content buried over 25 DOM levels deep (-15 points). Combined with a neural layer score of just 15/60 and agentic readiness of only 13/100, Miro’s homepage — despite having a strong technical structure score of 85/100 — is effectively opaque to AI systems. Add four duplicate h1 tags and 69 total headings, and the AI receives a page that looks structurally rich but is semantically empty.

4. The Structured Data Gap Is Enormous

Structured data (Schema.org markup) is how you explicitly tell AI systems what your page is about, what your product does, and how to categorize your company. It’s the closest thing to a direct conversation with the AI.

  • 12/20 (60%) have FAQ-style content on their homepage but don’t mark it up with FAQ schema — a missed opportunity for AI models to extract and cite authoritative answers.
  • 11/20 (55%) are missing Product schema on pages that clearly describe products.
  • 6/20 (30%) lack a proper Organization schema, the foundational markup that tells AI who you are.

The average schema score was 63/100. Salesforce (88), Ahrefs (83), and HubSpot (80) led on schema implementation, while Semrush (48) and Shopify (40) lagged behind.
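For example, an FAQ block can be marked up with JSON-LD like this — the question, answer, and pricing below are placeholders, not any company’s actual copy:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "How much does ExampleApp cost?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "Plans start at $10 per user per month."
    }
  }]
}
</script>
```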

5. Most Sites Don’t Signal Content Freshness to Machines

Half the sites in our study (10 out of 20) were flagged for “stale critical content” — but this doesn’t mean their content is actually outdated.

What the check measures: we look at pages with titles indicating critical business content (pricing, product features, specifications) and check whether they expose a machine-readable last-modified date — either through the HTTP Last-Modified header or the sitemap lastmod field. Pages with no date or a date older than 6 months are flagged.

Sites like Stripe update their documentation frequently, but without machine-readable timestamps on those pages, AI systems have no way to know that. LLMs and AI crawlers increasingly weigh content recency as a trust signal when deciding what to cite and recommend. Without these signals, even regularly updated content may be treated as outdated.

The fix is straightforward: ensure your critical pages expose a Last-Modified HTTP header and include lastmod values in your sitemap for pricing, product, and feature pages.
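In a sitemap, the signal looks like this (placeholder URL and date):

```xml
<url>
  <loc>https://example.com/pricing</loc>
  <lastmod>2025-01-15</lastmod>
</url>
```

On the HTTP side, a quick `curl -sI https://example.com/pricing | grep -i last-modified` shows whether the header is present at all.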

6. Nobody’s Ready for AI Agents — And Maturity Is Zero

Beyond visibility to AI search, we measured Agentic Actionability — how well AI agents can understand and interact with your site. The average Actionability score was just 53/100.

  • HubSpot and Salesforce — two of the biggest CRM platforms — scored only 41 on actionability, with agentic readiness scores of just 20 out of 100.
  • Miro scored lowest at 36, with an agentic readiness of only 13.
  • Webflow led at 70, the only site to score above 65 on this pillar.

And perhaps the most telling finding: zero of the 20 sites achieved a “mature” maturity level. The distribution: Early 5%, Developing 75%, Solid 20%, Mature 0%. Only the top four — Webflow (62), Slack (61), Calendly (61), and Figma (60) — reached “Solid.” The remaining 15 are “Developing,” and Miro at 41 is the sole site still classified as “Early.”

This isn’t a failure of individual teams — it’s a symptom of the tooling. The frameworks that power modern SaaS marketing (Next.js, Nuxt, Gatsby, Webflow) were designed to ship beautiful, interactive websites for human browsers. They weren’t designed to produce clean, semantically structured documents for AI consumption. Until the frameworks themselves adapt to an AI-first index, every team building with them inherits these patterns by default.

The gap is real, and it’s an opportunity for anyone willing to address it first.

What This Means — And What You Can Do About It

This study isn’t about naming winners and losers. Every company in this list has a strong product and a well-built website — the average technical structure score across all 20 was 82/100. The issue isn’t quality. It’s that the web was designed for browsers, and the audience has expanded to include AI.

A note on JavaScript rendering

Google’s crawler has executed JavaScript since 2014, and some will argue this makes server-side HTML less important. But the AI landscape is broader than Google. LLM crawlers (GPTBot, ClaudeBot, PerplexityBot) typically do not execute JavaScript, or do so only in a limited capacity to conserve compute. When we talk about AI Visibility, we’re talking about the full ecosystem of AI systems that may evaluate your site — not just traditional search engines.

Based on our findings, here are the highest-impact actions, in priority order:

  1. Fix your semantic hierarchy. Audit your heading structure. Navigation labels and footer links should not be h2 tags. Reserve heading tags for actual content — your value proposition, product features, pricing tiers, and FAQs. Consider implementing Server-Side Rendering (SSR) for public-facing pages so that crawlers and AI systems receive fully rendered, content-rich HTML.
  2. Reduce the Token Tax. Improve the signal-to-noise ratio of your HTML. Minimize deeply nested DOM structures and reduce CSS class proliferation. The goal: ensure your meaningful content represents a much larger share of what AI systems process.
  3. Add an llms.txt file. Give AI systems a clean, structured summary of your product without forcing them to parse HTML at all. Think of it as robots.txt for the AI era. It takes an afternoon to implement.
  4. Close the structured data gap. Add Organization, Product, and FAQ schema to your pages. This is how you explicitly tell AI what your page is about in a language it natively understands.
  5. Add machine-readable freshness signals. Ensure your critical pages (pricing, product, features) expose a Last-Modified HTTP header and include lastmod values in your sitemap.
  6. Start thinking about Agentic Readiness. As AI agents increasingly mediate purchasing decisions, the sites that are easiest for agents to navigate and extract information from will have a measurable advantage.
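For reference, a minimal llms.txt follows the emerging convention of a Markdown file at the site root: an H1 with the product name, a one-line blockquote summary, and annotated links to key pages. All copy below is placeholder:

```markdown
# ExampleApp

> ExampleApp is a scheduling platform for distributed teams.

## Key pages

- [Pricing](https://example.com/pricing): plans, limits, and billing FAQ
- [Product](https://example.com/product): features and integrations
- [Docs](https://example.com/docs): setup guides and API reference
```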

Methodology Note

This study was conducted using Sigly’s automated analysis pipeline, running 409 technical checks per domain across categories including crawl coverage, AI access directives, neural readability, structured data, content quality, trust signals, and agentic actionability. Each site was crawled (up to 15 pages per domain) and analyzed against its live homepage and discovered internal pages.

Sigly Scores range from 0 to 100 and represent composite AI readiness. AI Visibility is a sub-score measuring the combination of technical access (can AI crawlers reach the content?) and neural consumption (can AI models comprehend it?). The maturity levels are derived from the Sigly Score combined with check severity: Mature (≥80, zero critical failures), Solid (≥60, ≤2 criticals), Developing (≥45), Early (below 45).
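The maturity banding can be sketched as a small classifier — thresholds taken from this section, with `criticals` standing in for the count of critical check failures:

```shell
# Usage: maturity <sigly_score> <critical_failures>
maturity() {
  local score=$1 criticals=$2
  if   [ "$score" -ge 80 ] && [ "$criticals" -eq 0 ]; then echo "Mature"
  elif [ "$score" -ge 60 ] && [ "$criticals" -le 2 ]; then echo "Solid"
  elif [ "$score" -ge 45 ]; then echo "Developing"
  else echo "Early"
  fi
}
```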

The Sigly Score applies weighted category scores across 22 dimensions plus penalties for critical structural issues. The two heaviest penalties are extreme code bloat (under 10% token efficiency, -20 points) and deep DOM nesting (content buried over 25 levels, -15 points). Five sites received both penalties simultaneously: Webflow, Monday, Zendesk, Ahrefs, and Miro.

The heading analysis in Finding #1 was conducted via direct HTTP requests to each site’s homepage using a standard browser user-agent string. All heading counts are reproducible using the curl commands shown in the article.

Check Your Own Score

Curious how your site compares? Run a free AI Visibility evaluation — same engine, same 409 checks, instant results.

See exactly what AI systems see when they look at your website.

This study was produced by Sigly, the AI Visibility monitoring platform. Sigly helps companies understand and improve how AI systems see their websites — and practices what it preaches with clean semantic hierarchy, SSR, llms.txt, and structured data on every public page.