This Brand Had 28 AI Mentions and Zero #1 Spots. Then It Finally Broke Through. Here's What Changed.
A midmarket B2B SaaS brand appeared in 28 AI engine responses across two weekly waves, matching its direct competitor in both mention count and citation count, without earning a single position-1 recommendation. Then, in the third wave, it finally broke through on one engine, ending a 0-for-28 streak that had become the strongest structural pattern in our dataset. The competitor's position-1 dominance dropped from 100% to 88% across three waves, but it still holds the top spot in 14 of 16 responses.
Visibility without authority is the most deceptive pattern in AI search. A brand can look healthy by mention count while capturing zero of the buyer attention that position 1 delivers.
Context: Why This Matters
These findings come from FogTrail's State of AI Citations study: 20 queries sent to 5 AI search engines (ChatGPT, Perplexity, Gemini, Grok, and Claude) via real-time API calls, tracking 25 B2B SaaS brands across 5 categories over three weekly waves. That is 100 engine-query pairs per wave, 300 total, simulating how actual buyers interact with these platforms. As of March 2026, the dataset reveals that being mentioned and being recommended first are fundamentally different outcomes, and most brands are measuring the wrong one.
For background on the broader disagreement patterns across AI engines, see our analysis of how AI search engines disagree on the top recommendation in 50% of queries.
The Data: Same Mentions, Same Citations, Wildly Different Outcomes
The two brands in this case study compete in the same Dev Tools category. We will call them Brand A (the dominant one) and Brand B (the perpetual runner-up) before revealing their identities with the full data.
Wave-by-Wave Position-1 Breakdown
| Metric | Brand A (W1) | Brand A (W2) | Brand A (W3) | Brand B (W1) | Brand B (W2) | Brand B (W3) |
|---|---|---|---|---|---|---|
| Total mentions | 14 | 16 | 16 | 14 | 14 | 16 |
| Total citations | 6 | 7 | 6 | 6 | 6 | 5 |
| Engines (of 5) | 5 | 5 | 5 | 5 | 5 | 5 |
| Position-1 placements | 14/14 (100%) | 15/16 (94%) | 14/16 (88%) | 0/14 (0%) | 0/14 (0%) | 1/16 (6%) |
Brand B matched Brand A on every metric that most marketers track: total appearances, citation counts, engine coverage. By any dashboard that reports "number of AI mentions," these two brands looked equally healthy. They were not.
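The divergence is easy to reproduce once each engine response is reduced to an ordered list of the tracked brands it names. A minimal sketch in Python (the response lists below are illustrative, not the study's raw data):

```python
# Each response is the ordered list of tracked brands an engine named.
# Position 1 is simply index 0. The lists here are illustrative examples.
responses_wave1 = [
    ["Vercel", "Netlify", "Railway"],
    ["Vercel", "Netlify"],
    ["Vercel", "Netlify", "Render"],
]

def mention_count(responses, brand):
    """How many responses mention the brand at all."""
    return sum(brand in r for r in responses)

def position1_rate(responses, brand):
    """Share of the brand's appearances where it was named first."""
    appearances = [r for r in responses if brand in r]
    if not appearances:
        return 0.0
    return sum(r[0] == brand for r in appearances) / len(appearances)

for brand in ("Vercel", "Netlify"):
    print(brand, mention_count(responses_wave1, brand),
          round(position1_rate(responses_wave1, brand), 2))
```

On data like this, `mention_count` returns the same number for both brands while `position1_rate` separates them completely, which is exactly the gap a mentions-only dashboard hides.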
The Reveal: Vercel vs. Netlify
Brand A is Vercel. Brand B is Netlify.
Here is the full position-1 data for all four Dev Tools queries across three waves:
| Query | Engine | W1 #1 | W2 #1 | W3 #1 |
|---|---|---|---|---|
| Best platform for deploying web apps | Perplexity | Vercel | Vercel | Vercel |
| | ChatGPT | Vercel | Vercel | Netlify |
| | Gemini | Vercel | Vercel | Vercel |
| | Grok | Vercel | Vercel | Vercel |
| | Claude | Vercel | Vercel | Vercel |
| Vercel vs Netlify comparison | Perplexity | Vercel | Vercel | Vercel |
| | ChatGPT | Vercel | Vercel | Vercel |
| | Gemini | Vercel | Vercel | Vercel |
| | Grok | (none) | Vercel | Vercel |
| | Claude | Vercel | Vercel | Vercel |
| Best hosting for Next.js apps | Perplexity | Vercel | Vercel | Vercel |
| | ChatGPT | Vercel | Vercel | Vercel |
| | Gemini | Vercel | Vercel | Vercel |
| | Grok | Vercel | Vercel | Vercel |
| | Claude | Vercel | Vercel | Vercel |
| Cheapest cloud hosting | Perplexity | (none) | (none) | (none) |
| | ChatGPT | Fly.io | (none) | (none) |
| | Gemini | (none) | (none) | (none) |
| | Grok | (none) | Render | Render |
| | Claude | Railway | Railway | Railway |
Vercel held position 1 in every single Dev Tools response in Wave 1, a 100% rate across 14 appearances. Even in the direct head-to-head "Vercel vs Netlify comparison" query, all engines placed Vercel first. Netlify appeared second or third in those same responses, cited with direct URLs, with identical brand coverage, but never at the top.
The Three-Act Arc: Dominance, Confirmation, and Breakthrough
Act 1: Established Dominance (Wave 1)
Vercel achieved something no other brand in the 25-brand dataset managed: 100% position-1 consensus across all engines for every relevant query. Its category accounted for two of the only four queries in the entire dataset where all five engines agreed unanimously ("best platform for deploying web apps" and "best hosting for Next.js apps" both produced 5/5 Vercel consensus).
Netlify's Wave 1 stats looked fine on paper: 14 mentions, 6 citations, present on all 5 engines. But zero position-1 placements. For context on why this matters, the startups that outrank their category leaders in our dataset all share one trait: they capture position 1 on at least some queries, not just mentions.
Act 2: Confirmed Pattern (Wave 2)
Wave 2 confirmed that Netlify's absence from position 1 was structural, not random. Zero-for-14 became zero-for-28 across two identical query runs. Vercel dipped slightly to 94% (losing one #1 placement), but the pattern held. The gap was not closing.
This is where temperature-based sampling matters. AI engines use stochastic generation, meaning the same query can produce different results on different runs. If Netlify's absence from position 1 were just bad luck, you would expect it to appear at #1 at least occasionally across 28 opportunities. It did not. That made it the strongest structural pattern in the dataset.
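One way to sanity-check the bad-luck hypothesis is to assume some per-run probability p that stochastic sampling flips Netlify to #1, then ask how likely 28 straight misses would be. The p values below are illustrative assumptions, not estimates from the study:

```python
# Probability of never hitting position 1 in 28 independent runs,
# for several assumed per-run breakthrough probabilities p.
# The p values are illustrative, not fitted to the study data.
for p in (0.05, 0.10, 0.20):
    p_zero = (1 - p) ** 28
    print(f"p={p:.2f}: P(0 for 28) = {p_zero:.3f}")
```

Even a modest assumed 10% per-run chance makes a 0-for-28 streak a roughly 5% event, and at 20% it is vanishingly unlikely, which is why the streak reads as structural rather than sampling noise.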
Act 3: The Breakthrough (Wave 3)
Then Netlify broke through. In Wave 3, ChatGPT listed Netlify first for "best platform for deploying web apps," placing it ahead of Vercel for the first time in the study. ChatGPT's response included a cited URL to netlify.com. All four other engines continued to place Vercel first for the same query.
One appearance at position 1 out of 44 total opportunities across three waves. A 2.3% rate. But it happened, and it happened on the engine most prone to week-to-week variation. ChatGPT's brand citation count has swung from 23 to 12 to 14 across the three waves, making it the most volatile engine in the dataset by a wide margin.
Supporting Evidence: What the Engines Actually Said
When we asked ChatGPT "best platform for deploying web apps" in Wave 3, it led with Netlify and described its strengths before moving to Vercel. In Waves 1 and 2, the same query on ChatGPT opened with Vercel every time.
On the same Wave 3 query, Perplexity led with Vercel, followed by Netlify. Gemini opened with Vercel and listed Netlify second with a brand-own-site citation. Claude listed Vercel first, followed by Netlify, Railway, Render, and Fly.io. Grok also led with Vercel. The consensus still heavily favors Vercel, but the wall is no longer unbreakable.
Even in the head-to-head "Vercel vs Netlify comparison" query, all engines that returned structured data placed Vercel first across all three waves. This suggests that Netlify's breakthrough was query-specific and engine-specific, not a broad repositioning.
What This Means: Position 1 Is Structurally Sticky, but Not Permanent
Three implications stand out from this data.
First, mention count is a vanity metric in AI search. Netlify matched Vercel on mentions, citations, and engine coverage for three straight weeks. None of that translated into position-1 authority. A marketing dashboard that reports "you were mentioned by 5 AI engines" would have shown Netlify as performing identically to Vercel. It was not. As of March 2026, the gap between "mentioned" and "recommended first" is the most under-measured dimension of AI visibility.
Second, position-1 dominance erodes slowly, not suddenly. Vercel went from 100% to 94% to 88% across three waves. That is a consistent decline, but it took three weeks and 44 engine-query pairs for a single competitor breakthrough. Brands with strong position-1 authority should not panic over small dips, but they should track the trend. A steady decline, even a slow one, is a signal that competitor authority is building.
Third, breakthroughs happen on volatile engines first. Netlify's only #1 came on ChatGPT, the engine with the most divergent behavior across the dataset. ChatGPT-Gemini pairwise overlap dropped to 58% in Wave 3, the lowest agreement between any two engines in the entire study. If you are trying to break into position 1, ChatGPT may be the most receptive engine, but it is also the one where gains are hardest to confirm as durable.
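For readers who want to track agreement themselves, pairwise overlap between two engines can be computed as the share of queries where both name the same top brand. The study's exact definition may differ; the query-to-brand mappings below are illustrative:

```python
# Pairwise #1 agreement: share of queries where two engines name the
# same top brand. Dicts map query -> first-named brand; None means the
# engine returned no tracked brand. Data is illustrative.
chatgpt = {"deploy": "Netlify", "compare": "Vercel", "nextjs": "Vercel", "cheap": None}
gemini  = {"deploy": "Vercel",  "compare": "Vercel", "nextjs": "Vercel", "cheap": None}

def pairwise_overlap(a, b):
    """Agreement rate over queries where both engines named a #1 brand."""
    shared = [q for q in a if a[q] is not None and b.get(q) is not None]
    if not shared:
        return 0.0
    return sum(a[q] == b[q] for q in shared) / len(shared)

print(round(pairwise_overlap(chatgpt, gemini), 2))
```

Here the two engines agree on 2 of the 3 queries where both named a top brand, so the overlap is about 0.67; a figure like the 58% cited above would come from the same calculation over all 20 queries.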
What You Can Do About It
Breaking out of the visibility-without-authority pattern requires measuring position-1 rates (not just mention counts), benchmarking against direct competitors, and targeting volatile engines where breakthroughs are most likely.
- Track position, not just presence. Mention counts tell you whether engines know your brand exists. Position-1 tracking tells you whether they recommend it. These are different metrics with different implications for buyer behavior.
- Benchmark against your direct competitor's position-1 rate, not your own mention count. Netlify's problem was invisible unless you compared its positioning data to Vercel's. In isolation, its numbers looked healthy.
- Monitor across multiple waves. A single snapshot would have told Netlify it was mentioned by all 5 engines. Three waves revealed a 0-for-28 structural disadvantage. Weekly or bi-weekly tracking surfaces patterns that single checks miss entirely.
- Target the most volatile engine for breakthroughs. ChatGPT shows the most week-to-week variation in brand recommendations. If you are locked out of position 1 across all engines, ChatGPT may be the first place a breakthrough appears.
- Invest in the signals that drive position, not just mention. As our data on what it takes for startups to outrank category leaders shows, niche positioning, strong documentation, and community reputation are what separate brands that get mentioned from brands that get recommended first.
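Taken together, the checklist reduces to a small per-wave report that puts position-1 counts next to mention counts for your brand and its closest competitor. A minimal sketch with illustrative data:

```python
# wave -> list of ordered brand lists (one per engine-query pair).
# The data below is illustrative, not the study's raw responses.
waves = {
    1: [["Vercel", "Netlify"], ["Vercel", "Netlify"], ["Railway"]],
    2: [["Vercel", "Netlify"], ["Netlify", "Vercel"], ["Render"]],
}

def wave_report(responses, brands):
    """Mentions and position-1 counts per brand for one wave."""
    report = {}
    for b in brands:
        hits = [r for r in responses if b in r]
        report[b] = {"mentions": len(hits),
                     "position1": sum(r[0] == b for r in hits)}
    return report

for wave, responses in sorted(waves.items()):
    print(wave, wave_report(responses, ["Vercel", "Netlify"]))
```

Printed side by side across waves, this is the comparison that would have surfaced Netlify's 0-for-28 pattern immediately, where a mentions-only report showed parity.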
Methodology
We ran 20 queries across 5 AI search engines (ChatGPT, Perplexity, Gemini, Grok, and Claude) via real-time API calls over three weekly waves (March 6, 10, and 15, 2026), simulating how actual users interact with these platforms. We tracked 25 B2B SaaS brands across 5 categories (CRM, Project Management, Email Marketing, Analytics, and Dev Tools), recording position-1 placement, total mentions, and citation URLs for each engine-query pair. Position 1 is defined as the first tracked brand named in the engine's response.
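The collection loop implied by this methodology can be sketched as follows; `query_engine` is a hypothetical stand-in for the per-engine API client, not a real library call:

```python
# Sketch of the collection loop described above. `query_engine` is a
# hypothetical stand-in for the real-time API call to each engine.
TRACKED_BRANDS = ["Vercel", "Netlify", "Railway", "Render", "Fly.io"]  # Dev Tools subset

def first_tracked_brand(response_text, brands):
    """Position 1 = the first tracked brand named in the response."""
    positions = [(response_text.find(b), b) for b in brands if b in response_text]
    return min(positions)[1] if positions else None

def run_wave(queries, engines, query_engine):
    results = []
    for q in queries:
        for e in engines:
            text = query_engine(e, q)  # real-time API call (hypothetical client)
            results.append({"query": q, "engine": e,
                            "position1": first_tracked_brand(text, TRACKED_BRANDS)})
    return results

# Example with a canned response instead of a live call:
fake = lambda engine, q: "Vercel is the top pick, with Netlify close behind."
print(run_wave(["best platform for deploying web apps"], ["ChatGPT"], fake))
```

Taking the `min` over (index, brand) tuples picks the earliest-named tracked brand, which matches the position-1 definition stated above.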
Frequently Asked Questions
What does "visibility without authority" mean in AI search?
Visibility without authority means a brand appears in AI engine responses (mentioned, even cited with URLs) but is never placed at position 1. The brand is visible to the AI engine's knowledge base, but the engine does not treat it as the top recommendation. In our dataset, Netlify had identical mention counts and citation counts to Vercel but zero position-1 placements across 28 consecutive engine-query pairs.
Can a brand break out of a permanent #2 position in AI search?
Yes, but it is rare and engine-specific. In our study, Netlify broke a 0-for-28 streak in Wave 3, earning its first position-1 placement on ChatGPT. However, it remained at #2 or lower on all four other engines. Breakthroughs appear to happen first on the most volatile engine (ChatGPT) and may take multiple weeks to surface.
How should I measure AI search performance if mention count is unreliable?
Track position-1 rate alongside mention count and citation count. Position 1 captures disproportionate buyer attention in AI responses. Compare your position data against your direct competitor's, not just your own absolute numbers. A brand with 16 mentions and zero position-1 placements is in a fundamentally different situation than a brand with 16 mentions and 14 position-1 placements.
Does having more citations help you reach position 1?
Not necessarily. Netlify had 6 citations in Wave 1, matching Vercel exactly, and still held zero position-1 placements. Citation count reflects whether AI engines link to your domain. Position 1 reflects whether they recommend your brand first. These are driven by different signals: citation count correlates with having linkable content (documentation, pricing pages), while position 1 appears to correlate with perceived category authority and community reputation.
How long does it take for AI search positions to change?
Based on three waves of data, position-1 dominance erodes slowly. Vercel's rate declined from 100% to 94% to 88% over three weeks. Netlify's breakthrough took 44 engine-query pairs (across three waves) before appearing once. Position changes in AI search are measured in weeks, not days, and a single breakthrough does not indicate a trend until confirmed across subsequent waves.