As of April 2026, AI search engines are backing brand recommendations with source links 19.9% of the time, up from 15.7% three weeks ago. That 27% relative jump is the largest single-wave increase we have recorded across four weeks of continuous data collection. Perplexity drove most of it, more than doubling its brand citations from 4 to 10. Vercel had the highest per-brand citation rate of any tracked brand at 56%.
The first three waves held flat at 15-16%. This is the first meaningful departure from that baseline.
What We Track and How
We run the same 20 queries across 5 AI search engines (ChatGPT, Perplexity, Gemini, Grok, and Claude) every week. Each query goes out as a real-time API call, simulating how actual users interact with these platforms. We track 25 B2B SaaS brands across 5 categories: CRM, Project Management, Email Marketing, Analytics, and Dev Tools. Wave 4 data was collected on April 7, 2026, with 311 total brand mention instances across 5 engines.
A "citation" in our methodology means the engine included a source URL pointing to or about the brand, not just a mention in the response text. The distinction matters: you can be named in a response and never linked. We track both.
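A minimal sketch of how a single mention record could be classified under that definition (the record fields here are hypothetical, not our internal schema):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Mention:
    """One brand mention in one engine response."""
    brand: str
    engine: str
    source_url: Optional[str] = None  # URL attached to the mention, if any

    @property
    def is_cited(self) -> bool:
        # A mention counts as a citation only when a source URL accompanies it.
        return self.source_url is not None

linked = Mention("Vercel", "Perplexity", "https://vercel.com/docs")
unlinked = Mention("Google Analytics", "Claude")
print(linked.is_cited, unlinked.is_cited)  # True False
```

Both records count toward the mention total; only the first counts toward the citation total.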
For context on how brand mentions and citations differ in practice, this breakdown of what makes AI engines choose one source over another covers the mechanics in detail.
The Data
The citation rate across all five engines and all 20 queries per wave:
| Wave | Date | Cited | Mentioned | Citation Rate |
|---|---|---|---|---|
| 1 | 2026-03-06 | 45 | 292 | 15.4% |
| 2 | 2026-03-10 | 45 | 301 | 15.0% |
| 3 | 2026-03-15 | 48 | 305 | 15.7% |
| 4 | 2026-04-07 | 62 | 311 | 19.9% |
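The citation rate in each row is simply cited mentions divided by total mentions. A quick sketch reproducing the table's rates from its raw counts:

```python
# Wave-level counts from the table: wave -> (cited, mentioned).
waves = {
    1: (45, 292),
    2: (45, 301),
    3: (48, 305),
    4: (62, 311),
}

def citation_rate(cited: int, mentioned: int) -> float:
    """Percent of brand mentions that came with a source URL."""
    return round(100 * cited / mentioned, 1)

for wave, (cited, mentioned) in waves.items():
    print(f"Wave {wave}: {citation_rate(cited, mentioned)}%")
# Wave 1: 15.4%, Wave 2: 15.0%, Wave 3: 15.7%, Wave 4: 19.9%
```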
Three waves of flat results. Then a jump to 19.9%.
The per-engine picture shows where the change came from:
| Engine | W1 Brands Cited | W2 | W3 | W4 |
|---|---|---|---|---|
| Perplexity | 4 | 5 | 4 | 10 |
| ChatGPT | 13 | 12 | 14 | 14 |
| Grok | 1 | 7 | 7 | 7 |
| Gemini | 6 | 6 | 5 | 10 |
| Claude | 6 | 6 | 6 | 4 |
Perplexity went from 4 brands cited to 10, a 150% increase in one wave. Gemini also doubled, from 5 to 10. ChatGPT held steady at 14. Claude dropped from 6 to 4, continuing a worsening trend we have noted separately. Grok stayed flat at 7.
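The relative changes quoted above fall out of the Wave 3 and Wave 4 columns directly; a small sketch of the wave-over-wave math:

```python
def relative_change(before: int, after: int) -> str:
    """Wave-over-wave change in distinct brands cited, as a signed percent."""
    return f"{100 * (after - before) / before:+.0f}%"

# Wave 3 -> Wave 4 counts from the per-engine table above.
for engine, w3, w4 in [("Perplexity", 4, 10), ("Gemini", 5, 10),
                       ("ChatGPT", 14, 14), ("Claude", 6, 4), ("Grok", 7, 7)]:
    print(engine, relative_change(w3, w4))
# Perplexity +150%, Gemini +100%, ChatGPT +0%, Claude -33%, Grok +0%
```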
So the aggregate jump is primarily Perplexity and Gemini. ChatGPT was already the citation leader by this metric. The question is whether Perplexity and Gemini's behavior reflects a durable shift or a one-wave spike.
The Perplexity Spike
Perplexity cited only 4 distinct brands in Wave 3. In Wave 4, it cited 10. That is not a small variation.
When we compare individual query responses, Perplexity's Wave 4 responses had noticeably more source links attached to brand mentions than in prior waves. For queries like "best alternative to Mailchimp" and "best platform for deploying web apps," Perplexity provided source URLs alongside nearly every brand it named. In prior waves, the same queries returned brand names without links.
We do not have a confirmed explanation for why Perplexity's citation behavior changed. It could be a retrieval algorithm update, a change in how its web search integration surfaces URLs, or just natural variance that happened to cluster on this wave. The Wave 3 to Wave 4 gap is large enough that we are tracking it carefully in Wave 5.
What we can say: brands that were mentioned but never cited on Perplexity in Waves 1-3 now have source links in their recommendations. That is a concrete change, whatever the cause.
The Vercel Case Study
Vercel had the highest per-brand citation rate of any tracked brand in Wave 4 at 56%. More than half of its AI mentions came with a direct source link. For context, Google Analytics, also mentioned by all 5 engines across analytics queries, had a citation rate of exactly 0% in Wave 4. Every engine named it, zero engines linked to it.
The difference between Vercel and Google Analytics is not brand size or awareness. Both are well-established products with extensive online documentation. The difference is content structure. Vercel's documentation, comparison pages, and developer-focused content are written in a format that AI engines can easily extract, attribute, and link. Google Analytics is mentioned as a baseline reference, not as a cited source.
This is the thing the citation rate increase makes more concrete: the engines that are now citing brands more aggressively are selecting which brands get links. Being in the conversation is not the same as being the source.
Our breakdown of third-party versus first-party citations covers the structural differences between content types that AI engines prefer to link, which maps directly to what we saw with Vercel.
What This Means
A rising citation rate changes the value calculation for AI mentions. For the past three waves, a brand mention in an AI response was worth roughly the same regardless of engine or context: the brand got named, the user might search for it separately, no link was provided. That is changing.
If Perplexity and Gemini continue to cite more aggressively, the gap between "mentioned" and "cited" brands will widen. Right now, 80.1% of brand mentions across all engines still come with no source link. But the direction is toward more citations, not fewer.
The brands that have already invested in structured, citable content are positioned to capture those links as citation rates increase. Vercel's 56% citation rate is not luck. It reflects years of developer documentation, technical comparison content, and third-party coverage that gives engines something concrete to link.
Conversely, brands that show up only in aggregator roundups or product comparison databases may continue to be mentioned without citation. The third-party review site ecosystem accounts for 46-78% of all citation URLs across engines (Perplexity at 67.1%, Grok at 77.8%). If a brand is not in those sources, it is harder to link even when an engine recommends it.
One more observation on the Claude side: while overall citation rates rose, Claude's cited brand count dropped from 6 to 4. Claude mentioned 20 brands across 20 queries but never linked to a single brand-owned URL, a 0.0% brand-owned-site rate. ChatGPT, by contrast, linked to brand websites 19.1% of the time. Rising aggregate citation rates do not mean every engine is moving in the same direction.
What You Can Do About It
- Audit which engines cite you vs. just mention you. The distinction between cited and mentioned is now more consequential than it was a month ago. If Perplexity mentions you but never links, that is actionable.
- Prioritize content that third-party review sites index. 46-78% of citation URLs across engines point to third-party reviews. If you are not on G2, Capterra, or equivalent, you are harder to cite even when recommended.
- Build structured comparison content. Vercel's 56% citation rate tracks with its library of technical comparison and deployment documentation. Engines link to content that directly answers the query, not just brand pages.
- Do not assume a mention is a citation. Google Analytics is a useful reminder: unanimous engine coverage, zero source links. Share of voice metrics that conflate mentions with citations will overstate your actual AI search presence.
- Track Perplexity and Gemini citation behavior specifically. These are the two engines that moved most in Wave 4. If the trend continues into Wave 5, they may overtake ChatGPT as the most citation-aggressive engines.
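The first audit step above can be sketched as a simple tally over mention records, splitting cited from mentioned-only per engine (the record tuples and brand name here are illustrative placeholders):

```python
from collections import defaultdict

# Hypothetical mention records: (engine, brand, source_url or None).
records = [
    ("Perplexity", "YourBrand", "https://example.com/review"),
    ("Perplexity", "YourBrand", None),
    ("Claude", "YourBrand", None),
    ("ChatGPT", "YourBrand", "https://yourbrand.example/docs"),
]

def audit(records, brand):
    """Per-engine counts of cited vs. mentioned-only for one brand."""
    tally = defaultdict(lambda: {"cited": 0, "mentioned_only": 0})
    for engine, b, url in records:
        if b != brand:
            continue
        tally[engine]["cited" if url else "mentioned_only"] += 1
    return dict(tally)

print(audit(records, "YourBrand"))
```

Any engine whose row shows mentions but zero citations is the actionable gap the first bullet describes.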
For a broader view of what citation patterns look like across engines and how they affect brand visibility, the March 2026 citation trends analysis has the full three-wave baseline this data is compared against.
Methodology
We ran 20 queries across 5 AI search engines: ChatGPT, Perplexity, Gemini, Grok, and Claude. Each query was sent as a real-time API call, simulating how actual users interact with these platforms. We tracked 25 B2B SaaS brands across 5 categories: CRM, Project Management, Email Marketing, Analytics, and Dev Tools. Citation rates are calculated as brand mentions with a source URL divided by total brand mentions across all engine-query pairs. Wave 4 data represents 311 brand mention instances across 100 engine-query combinations.
FAQ
What does "citation rate" mean in AI search? Citation rate measures how often an AI engine backs a brand recommendation with a source URL, not just a name mention. A citation rate of 19.9% means that out of every 100 times a brand was named across all engines and queries, it was linked to a source 19.9 times.
Why did the citation rate jump between Wave 3 and Wave 4? Perplexity and Gemini drove most of the increase. Perplexity went from citing 4 distinct brands to 10, a 150% jump in one wave. Gemini went from 5 to 10. ChatGPT held steady. The specific cause of Perplexity's change is not confirmed; it could reflect a retrieval algorithm update or increased web search integration in its responses.
Which AI engine has the highest citation rate? As of Wave 4, ChatGPT cites the most distinct brands (14 out of 25 tracked), giving it the highest brand-level citation coverage. Perplexity and Gemini both cited 10 brands in Wave 4, up from 4-5 in previous waves. Claude has the lowest citation rate, citing only 4 brands and never linking to a brand-owned website.
Does a high mention rate mean a brand is performing well in AI search? Not necessarily. Google Analytics had a 0% citation rate in Wave 4 despite being mentioned by all 5 engines in every analytics query. Vercel, cited in over half its mentions, is a better example of strong AI search performance. The distinction between being named and being linked is becoming more significant as overall citation rates rise.
How is this different from traditional SEO? In traditional SEO, a ranking determines whether users click to your site. In AI search, even if your brand is recommended, there may be no link for users to follow. The rising citation rate means AI engines are increasingly providing those links, making the content and source quality behind the mention more important than it was even a month ago.