Netlify appeared in 28 AI engine responses across two weekly waves, matched or nearly matched its direct competitor Vercel on both mention count and citation count, and earned zero position-1 placements. Then, in Wave 3, ChatGPT placed Netlify first for "best platform for deploying web apps," breaking a streak that had become the strongest structural pattern in our 25-brand dataset. Vercel's position-1 dominance dropped from 100% to 88% across three waves, but it still holds the top spot in 14 of 16 engine responses.
Visibility without authority is the most deceptive pattern in AI search. A brand can look healthy by mention count while capturing none of the buyer attention that position 1 delivers.
Why This Matters
These findings come from FogTrail's State of AI Citations study: 20 queries sent to 5 AI search engines (ChatGPT, Perplexity, Gemini, Grok, and Claude) via real-time API calls, tracking 25 B2B SaaS brands across 5 categories over three weekly waves. That is 300 engine-query pairs per wave, 900 total, simulating how actual buyers interact with these platforms. As of March 2026, the dataset reveals that being mentioned and being recommended first are fundamentally different outcomes, and most brands are measuring the wrong one.
For background on the broader disagreement patterns across AI engines, see our analysis of how AI engines disagree on the top recommendation in 50% of queries.
The Data: Same Mentions, Same Citations, Wildly Different Outcomes
Vercel and Netlify compete in the same Dev Tools category. Both are midmarket platforms. Both appeared on all 5 engines in every wave. By any dashboard that reports "number of AI mentions," these two brands looked equally healthy. They were not.
Wave-by-Wave Comparison
| Metric | Vercel (W1) | Vercel (W2) | Vercel (W3) | Netlify (W1) | Netlify (W2) | Netlify (W3) |
|---|---|---|---|---|---|---|
| Total mentions | 14 | 16 | 16 | 14 | 14 | 16 |
| Total citations | 6 | 7 | 6 | 6 | 6 | 5 |
| Engines (of 5) | 5 | 5 | 5 | 5 | 5 | 5 |
| Position-1 placements | 14/14 (100%) | 15/16 (94%) | 14/16 (88%) | 0/14 (0%) | 0/14 (0%) | 1/16 (6%) |
Netlify matched or nearly matched Vercel on every metric most marketers track: total appearances, citation counts, engine coverage. The only metric where it clearly lost was the one that matters most to buyers: which brand the engine recommends first.
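The gap between these two metrics is easy to compute once you record a position for each response, not just a yes/no mention. Here is a minimal sketch; the record format is illustrative, not FogTrail's actual schema, and the numbers plug in the Wave 1 figures from the table above.

```python
from collections import defaultdict

def visibility_report(records):
    """records: (brand, position) tuples, one per engine-query response
    that mentioned the brand. Position 1 = the engine's top recommendation."""
    stats = defaultdict(lambda: {"mentions": 0, "p1": 0})
    for brand, position in records:
        stats[brand]["mentions"] += 1
        if position == 1:
            stats[brand]["p1"] += 1
    return {
        brand: {"mentions": s["mentions"], "p1_rate": s["p1"] / s["mentions"]}
        for brand, s in stats.items()
    }

# Wave 1 from the table: 14 mentions each, but Vercel held #1 in all 14
# responses while Netlify held none (shown here as position 2 throughout).
wave1 = [("Vercel", 1)] * 14 + [("Netlify", 2)] * 14
report = visibility_report(wave1)
# report["Vercel"]["p1_rate"] is 1.0; report["Netlify"]["p1_rate"] is 0.0
```

A dashboard that only sums mentions collapses both brands into the same number; the `p1_rate` field is what separates them.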
Position-1 Results Across All Dev Tools Queries
Here is the full position-1 data for all four Dev Tools queries across three waves:
| Query | Engine | W1 #1 | W2 #1 | W3 #1 |
|---|---|---|---|---|
| Best platform for deploying web apps | Perplexity | Vercel | Vercel | Vercel |
| | ChatGPT | Vercel | Vercel | Netlify |
| | Gemini | Vercel | Vercel | Vercel |
| | Grok | Vercel | Vercel | Vercel |
| | Claude | Vercel | Vercel | Vercel |
| Vercel vs Netlify comparison | Perplexity | Vercel | Vercel | Vercel |
| | ChatGPT | Vercel | Vercel | Vercel |
| | Gemini | Vercel | Vercel | Vercel |
| | Grok | (none) | Vercel | Vercel |
| | Claude | Vercel | Vercel | Vercel |
| Best hosting for Next.js apps | Perplexity | Vercel | Vercel | Vercel |
| | ChatGPT | Vercel | Vercel | Vercel |
| | Gemini | Vercel | Vercel | Vercel |
| | Grok | Vercel | Vercel | Vercel |
| | Claude | Vercel | Vercel | Vercel |
| Cheapest cloud hosting | Perplexity | (none) | (none) | (none) |
| | ChatGPT | Fly.io | (none) | (none) |
| | Gemini | (none) | (none) | (none) |
| | Grok | (none) | Render | Render |
| | Claude | Railway | Railway | Railway |
Vercel held position 1 in every single Dev Tools response in Wave 1. Even in the direct head-to-head "Vercel vs Netlify comparison" query, all engines placed Vercel first. Netlify appeared second or third in those same responses, cited with direct URLs, with identical engine coverage, but never at the top.
The Three-Act Arc: Dominance, Confirmation, Breakthrough
Act 1: Total Dominance (Wave 1)
Vercel achieved something no other brand in the 25-brand dataset managed: position 1 in every single response that mentioned it, 14 of 14 across all engines and queries. Two of those queries ("best platform for deploying web apps" and "best hosting for Next.js apps") produced 5/5 unanimous Vercel consensus, a level of agreement matched by only two other queries in the entire 20-query dataset.
Netlify's Wave 1 numbers looked fine on paper: 14 mentions, 6 citations, present on all 5 engines. But zero position-1 placements. For context, the startups that outrank their category leaders in our dataset all share one trait: they capture position 1 on at least some queries, not just mentions.
Act 2: Confirmed Pattern (Wave 2)
Wave 2 confirmed that Netlify's absence from position 1 was structural, not random. Zero-for-14 became zero-for-28 across two identical query runs. Vercel dipped slightly to 94%, appearing in 16 responses but holding #1 in only 15 of them, but the gap was not closing.
This is where temperature-based sampling matters. AI engines use stochastic generation, meaning the same query can produce different results on different runs. If Netlify's absence from position 1 were just bad luck, you would expect it to appear at #1 at least occasionally across 28 opportunities. It did not. That made the pattern the strongest structural finding in the dataset.
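The "bad luck" hypothesis can be put in rough numbers. The sketch below treats each response as an independent trial; the 10% baseline chance of landing #1 per response is an assumption for illustration, not a figure from the study.

```python
# If Netlify had even a modest 10% chance of taking #1 in any given
# response, how likely is a 0-for-28 streak under pure chance?
def prob_zero_hits(n, p):
    """Probability of zero successes in n independent trials,
    each with success probability p."""
    return (1 - p) ** n

p_streak = prob_zero_hits(28, 0.10)
# p_streak is about 0.05: a roughly 1-in-19 event under this assumption
```

The lower you believe Netlify's true per-response chance was, the less surprising the streak; the point is that at any non-trivial baseline rate, 28 straight misses looks structural rather than random.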
Act 3: The Breakthrough (Wave 3)
Then Netlify broke through. In Wave 3, ChatGPT listed Netlify first for "best platform for deploying web apps," placing it ahead of Vercel for the first time in the study. ChatGPT's response included a cited URL to netlify.com. All four other engines continued to place Vercel first for the same query.
One appearance at position 1 out of 44 total opportunities across three waves. A 2.3% rate. But it happened, and it happened on the most volatile engine in the dataset. ChatGPT's brand citation count swung from 23 to 12 to 14 across the three waves. Its volatility score of 39% is the highest of any engine, compared to Claude's 8%.
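The study does not publish its exact volatility formula, so as a hedged illustration, one plausible proxy is the coefficient of variation of an engine's weekly citation counts. This will not reproduce the 39% figure exactly, but it captures the same contrast between ChatGPT's swings and Claude's stability.

```python
import statistics

def volatility(series):
    """Week-to-week volatility as a fraction: population standard
    deviation of the series divided by its mean."""
    return statistics.pstdev(series) / statistics.mean(series)

# ChatGPT's citation counts per wave, from the article: 23 -> 12 -> 14.
chatgpt_vol = volatility([23, 12, 14])
# A steadier illustrative series for contrast (Claude-like behavior;
# the actual Claude weekly counts are not given in this section).
steady_vol = volatility([14, 13, 14])
```

Under this proxy ChatGPT comes out around 29%, well above the steady series; whatever formula the study uses, the ordering of engines by volatility is the actionable part.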
What the Engines Actually Said
When we asked ChatGPT "best platform for deploying web apps" in Wave 3, it led with Netlify and described its strengths before moving to Vercel. In Waves 1 and 2, the same query on ChatGPT opened with Vercel every time.
On the same Wave 3 query, Perplexity led with Vercel, followed by Netlify. Gemini opened with Vercel and listed Netlify second with a brand-own-site citation. Claude listed Vercel first, followed by Netlify, Railway, Render, and Fly.io. Grok also led with Vercel. The consensus still heavily favors Vercel, but the wall is no longer unbreakable.
Even in the head-to-head "Vercel vs Netlify comparison" query, all engines placed Vercel first across all three waves. Netlify's breakthrough was query-specific and engine-specific, not a broad repositioning.
What This Means
Four implications stand out from this data.
First, mention count is a vanity metric in AI search. Netlify matched Vercel on mentions, citations, and engine coverage for three straight weeks. None of that translated into position-1 authority. A marketing dashboard that reports "you were mentioned by 5 AI engines" would have shown Netlify as performing identically to Vercel. It was not. As of March 2026, the gap between "mentioned" and "recommended first" is the most under-measured dimension of AI visibility.
Second, position-1 dominance erodes slowly, not suddenly. Vercel went from 100% to 94% to 88% across three waves. That is a consistent decline, but it took three weeks and 44 engine-query pairs for a single competitor breakthrough. Brands with strong position-1 authority should not panic over small dips, but they should track the trend. A steady decline, even a slow one, is a signal that competitor authority is building.
Third, breakthroughs happen on volatile engines first. Netlify's only #1 came on ChatGPT, the engine with the most divergent behavior across the dataset. ChatGPT-Gemini pairwise overlap dropped to 58% in Wave 3, the lowest agreement between any two engines in the entire study. If you are trying to break into position 1, ChatGPT may be the most receptive engine, but it is also the one where gains are hardest to confirm as durable.
Fourth, the pattern is not unique to dev tools. Other categories show similar dynamics. In CRM, HubSpot and Salesforce trade #1 positions across engines, but Salesforce holds a structural edge in raw citation count. In Project Management, four brands rotate #1 placements with zero consensus across two consecutive waves. The gap between being mentioned and being recommended first exists across the entire dataset.
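Pairwise overlap figures like the 58% ChatGPT-Gemini number can be computed in one line once you have each engine's #1 pick per query. The exact overlap definition the study uses is not stated, so this sketch assumes the simplest version: the share of shared queries where two engines name the same top brand. The query keys and picks below are toy data.

```python
def pairwise_overlap(top1_a, top1_b):
    """top1_a, top1_b: dicts mapping query -> the engine's #1 brand
    (None when no tracked brand was recommended). Returns the share of
    shared queries where both engines agree on the top pick."""
    shared = [q for q in top1_a if q in top1_b]
    agree = sum(1 for q in shared if top1_a[q] == top1_b[q])
    return agree / len(shared)

chatgpt = {"deploy": "Netlify", "nextjs": "Vercel", "compare": "Vercel", "cheap": None}
gemini  = {"deploy": "Vercel",  "nextjs": "Vercel", "compare": "Vercel", "cheap": None}
overlap = pairwise_overlap(chatgpt, gemini)  # 3 of 4 queries agree: 0.75
```

Run over all 20 queries for every engine pair, this yields the kind of agreement matrix where a single divergent engine (ChatGPT in Wave 3) drags its pairwise scores down.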
What You Can Do About It
Breaking out of the visibility-without-authority pattern requires measuring position-1 rates (not just mention counts), benchmarking against direct competitors, and targeting volatile engines where breakthroughs are most likely.
- Track position, not just presence. Mention counts tell you whether engines know your brand exists. Position-1 tracking tells you whether they recommend it. These are different metrics with different implications for buyer behavior.
- Benchmark against your direct competitor's position-1 rate. Netlify's problem was invisible unless you compared its positioning data to Vercel's. In isolation, its numbers looked healthy.
- Monitor across multiple waves. A single snapshot would have told Netlify it was mentioned by all 5 engines. Three waves revealed a 0-for-28 structural disadvantage that eventually cracked. Weekly or bi-weekly tracking surfaces patterns that single checks miss entirely.
- Target the most volatile engine for breakthroughs. ChatGPT shows the most week-to-week variation in brand recommendations. If you are locked out of position 1 across all engines, ChatGPT may be the first place a breakthrough appears.
- Invest in the signals that drive position, not just mention. Niche positioning, strong documentation, and community reputation are what separate brands that get mentioned from brands that get recommended first.
Methodology
We ran 20 queries across 5 AI search engines (ChatGPT, Perplexity, Gemini, Grok, and Claude) via real-time API calls over three weekly waves (March 6, 10, and 15, 2026), simulating how actual users interact with these platforms. We tracked 25 B2B SaaS brands across 5 categories (CRM, Project Management, Email Marketing, Analytics, and Dev Tools), recording position-1 placement, total mentions, and citation URLs for each engine-query pair.
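A collection loop matching this design can be sketched in a few lines. The `query_engine` callable below is a hypothetical client standing in for the real API integrations, and the record fields are illustrative, not FogTrail's schema.

```python
import datetime

ENGINES = ["ChatGPT", "Perplexity", "Gemini", "Grok", "Claude"]

def run_wave(queries, tracked_brands, query_engine):
    """One wave: send every query to every engine and record the
    tracked brands in order of appearance. 20 queries x 5 engines
    yields 100 engine-query records per wave for this category set."""
    records = []
    for query in queries:
        for engine in ENGINES:
            ranked = query_engine(engine, query)  # ordered brand list
            mentioned = [b for b in ranked if b in tracked_brands]
            records.append({
                "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
                "engine": engine,
                "query": query,
                "top1": mentioned[0] if mentioned else None,
                "mentions": mentioned,
            })
    return records
```

Storing `top1` separately from `mentions` at collection time is what makes the position-1 analysis above possible without reprocessing raw responses.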
Frequently Asked Questions
What does "visibility without authority" mean in AI search?
Visibility without authority means a brand appears in AI engine responses, sometimes even cited with URLs, but is never placed at position 1. The brand is visible to the engine's knowledge base, but the engine does not treat it as the top recommendation. In our dataset, Netlify had identical mention counts and citation counts to Vercel but zero position-1 placements across 28 consecutive engine-query pairs before finally breaking through once.
Can a brand break out of a permanent #2 position in AI search?
Yes, but it is rare and engine-specific. Netlify broke a 0-for-28 streak in Wave 3, earning its first position-1 placement on ChatGPT. It remained at #2 or lower on all four other engines. Breakthroughs appear to happen first on the most volatile engine (ChatGPT, with a 39% volatility score) and may take multiple weeks to surface.
How should I measure AI search performance if mention count is unreliable?
Track position-1 rate alongside mention count and citation count. Position 1 captures disproportionate buyer attention in AI responses. Compare your position data against your direct competitor's, not just your own absolute numbers. A brand with 16 mentions and zero position-1 placements is in a fundamentally different situation than a brand with 16 mentions and 14 position-1 placements.
Does having more citations help a brand reach position 1?
Not necessarily. Netlify had 6 citations in Wave 1, matching Vercel exactly, and still held zero position-1 placements. Citation count reflects whether AI engines link to your domain. Position 1 reflects whether they recommend your brand first. These appear to be driven by different signals: citation count correlates with having linkable content (documentation, pricing pages), while position 1 correlates with perceived category authority and community reputation.
How long does it take for AI search positions to change?
Based on three waves of data, position-1 dominance erodes slowly. Vercel's rate declined from 100% to 94% to 88% over three weeks. Netlify's breakthrough took 44 engine-query pairs across three waves before appearing once. Position changes in AI search are measured in weeks, not days, and a single breakthrough does not indicate a trend until confirmed across subsequent waves.
Related Resources
- We Asked 5 AI Engines the Same 20 Questions. They Disagreed on the #1 Answer 50% of the Time.
- 3 Startups That Outrank Their Category Leaders in AI Search
- We Ran the Same 20 Queries 3 Times Across 5 AI Engines. Here's How Much the Results Changed Each Week.
- Analytics Consensus Surged From 25% to 75% While Project Management Is Stuck at 0%
- How to Measure AI Visibility