FogTrail Team

Analytics Consensus Surged From 25% to 75% in 3 Waves While Project Management Is Stuck at 0%

AI search engines are rapidly converging on analytics recommendations while pulling apart on project management. Across three weekly waves of identical queries sent to ChatGPT, Perplexity, Gemini, Grok, and Claude, analytics consensus (4+ engines agreeing on a #1 brand) climbed from 25% to 75% of queries. Project management consensus dropped to 0% and has stayed there for two consecutive waves.

This divergence has direct implications for any B2B brand competing in either category. If you sell analytics software, the window to establish yourself as a default AI recommendation is closing fast. If you sell project management tools, the window is wide open, but that also means your competitors have the same opportunity.

The Data: Category Consensus Across Three Waves

We ran the same 20 queries across 5 AI search engines once per week for three weeks, tracking which brand each engine recommended first. Consensus means 4 or more of the 5 engines agreed on the same #1 recommendation. Here is how each B2B SaaS category performed.

| Category | W1 Consensus (4+/5) | W2 Consensus | W3 Consensus | Trend |
| --- | --- | --- | --- | --- |
| Analytics | 1/4 (25%) | 2/4 (50%) | 3/4 (75%) | Improving steadily |
| Dev Tools | 3/4 (75%) | 3/4 (75%) | 3/4 (75%) | Stable |
| Email Marketing | 2/4 (50%) | 2/4 (50%) | 2/4 (50%) | Stable |
| CRM | 3/4 (75%) | 3/4 (75%) | 1/4 (25%) | Declining sharply |
| Project Management | 1/4 (25%) | 0/4 (0%) | 0/4 (0%) | Stuck at zero |

Two categories are moving in opposite directions. Analytics climbed from 25% to 75%. Project management fell from 25% to 0% and stayed there. The other three categories tell their own stories: Dev Tools is locked in at 75% (Vercel owns that space), Email Marketing holds steady at 50%, and CRM, which looked stable for two waves, cracked in Wave 3 as HubSpot and Salesforce began trading positions across engines.
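The consensus rule behind these numbers is simple enough to express in a few lines. A minimal Python sketch, using engine picks taken from the query tables in this post:

```python
from collections import Counter

def consensus(first_picks, threshold=4):
    """Given each engine's #1 brand for one query, return the leading
    brand, its vote count, and whether it clears the consensus bar
    (4+ of the 5 engines agreeing)."""
    brand, votes = Counter(first_picks).most_common(1)[0]
    return brand, votes, votes >= threshold

# Wave 3 "analytics comparison": all five engines picked Amplitude.
print(consensus(["Amplitude"] * 5))  # ('Amplitude', 5, True)

# Wave 3 "PM for engineering teams": the engines split four ways.
print(consensus(["Asana", "Linear", "ClickUp", "Monday.com", "Asana"]))
```

A category's consensus rate is then just the share of its four queries for which this check passes.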

Analytics: Amplitude Locked In Unanimous Consensus

The analytics category showed the cleanest improvement trajectory in the entire dataset. Here is the query-by-query breakdown.

| Query | W1 | W2 | W3 |
| --- | --- | --- | --- |
| Analytics for SaaS | Amplitude (3/5) | Mixpanel (2/5) | Amplitude (3/5) |
| Analytics comparison | Split (2/5) | Amplitude (4/5) | Amplitude (5/5) |
| Alternative to GA | GA (5/5) | GA (5/5) | GA (5/5) |
| Analytics for startups | GA (3/5) | GA (4/5) | GA (5/5) |

The comparison query is the standout. As of March 2026, when users ask AI engines to compare product analytics platforms, all five engines now recommend Amplitude first. That is unanimous, 5 out of 5 consensus. Google Analytics similarly achieved 5/5 consensus on both the "alternative to GA" query (where it benefits from the incumbent advantage pattern) and the "analytics for startups" query.

Only one analytics query remains contested: "best analytics tool for SaaS," where Amplitude leads at 3/5 but Mixpanel still holds two engines. Even there, the trend is toward consolidation, not fragmentation.

Project Management: No Brand Can Break Through

Project management is the mirror image. Not a single PM query produced consensus in Wave 2 or Wave 3. No brand achieved even 3 out of 5 agreement on any query in any wave.

| Query | W1 | W2 | W3 |
| --- | --- | --- | --- |
| PM for engineering teams | Monday.com (2/5) | Linear (2/5) | Asana (2/5) |
| PM software to use | Split | Split (2/5 each) | ClickUp (2/5), Asana (2/5) |
| Alternative to Jira | Linear (3/5) | Split (2/5 each) | Linear (2/5), ClickUp (2/5) |
| Lightweight PM for startups | Asana (4/5) | Split (2/5 each) | ClickUp (2/5), Asana (2/5) |

The closest PM ever came to consensus was Wave 1's "lightweight PM for startups" query, where Asana held 4/5 agreement. That collapsed by Wave 2 and has not recovered. Each query now produces two or more brands tied at 2/5, with the leading brand rotating from wave to wave. Asana, Monday.com, ClickUp, and Linear each take turns at position 1 depending on which engine you ask.

When we asked ChatGPT "best PM tool for engineering teams," it recommended Asana. Gemini recommended Linear. Grok picked ClickUp. Claude went with Monday.com. Four engines, four different answers to the same question.

CRM: The Category That Cracked

CRM had been one of the two most stable categories alongside Dev Tools, holding 3/4 consensus across Waves 1 and 2. Then it dropped to 1/4 in Wave 3, the sharpest single-wave decline in the dataset.

The cause: Salesforce and HubSpot are trading positions across engines. Perplexity now leads with HubSpot for 3 of 4 CRM queries, making it the most HubSpot-aligned engine. Claude leads with Salesforce for 3 of 4 queries. The two engines have developed opposite CRM biases, and neither brand has established durable dominance.

This is a warning for any category that looks "settled." As of March 2026, even categories with clear market leaders can fragment in AI search. The stability that CRM showed for two waves was not structural. It was temporary.

Dev Tools and Email Marketing: Holding Steady

Dev Tools maintained 3/4 consensus across all three waves. Vercel holds position 1 in 88% of responses, though that number has slowly declined from 100% in Wave 1. Netlify earned its first ever #1 position in Wave 3, but it was a single engine (ChatGPT) on a single query.

Email Marketing held at 2/4 consensus throughout. Mailchimp dominates broader email queries, but Beehiiv achieved majority consensus (3/5 engines) for newsletter-specific queries in Wave 3, flipping Gemini from Mailchimp to Beehiiv. The category is stable at the macro level but shifting on niche queries.

What This Means

The category consensus data reveals something that overall consensus metrics hide. The aggregate strong-or-better consensus rate oscillated between 50% and 55% across three waves, looking flat. But that flat line masks dramatic movement within categories. Analytics is solidifying. PM is fragmenting. CRM just destabilized. These are different problems requiring different strategies.

For analytics challengers like Heap (which has 12 mentions but only 1 citation across Wave 3), the position of brands like Amplitude and Google Analytics is becoming structural. When all five engines agree on a recommendation, displacing that recommendation requires a fundamentally different approach than competing in a fragmented space. The time for aggressive AEO investment in analytics was Wave 1, not Wave 3.

For PM brands, the calculus is reversed. No incumbent has locked in the #1 spot. Any of the four major PM brands (Asana, Monday.com, ClickUp, Linear) could potentially capture consensus with sustained, focused effort. The engines have not made up their minds. That is both the opportunity and the risk: if you do not capture that consensus, a competitor will.

The CRM crack is perhaps the most instructive case. It shows that consensus is not permanent. HubSpot and Salesforce are in an active position war across the engines, and the outcome is genuinely uncertain. Brands that assumed their AI search positions were stable should revisit that assumption.

What You Can Do About It

  • If you are in a high-consensus category (Analytics, Dev Tools): Your priority is defense. Monitor your position weekly across all five engines. Any crack, like Netlify's breakthrough against Vercel, is the early signal of erosion.
  • If you are in a low-consensus category (PM): This is your window. Invest aggressively in the content signals that AI engines use for recommendations: strong documentation, third-party reviews, and structured comparison content. The first PM brand to achieve 3/5 consensus on multiple queries will have a significant advantage.
  • If your category just destabilized (CRM): Track which engines are shifting and in which direction. Perplexity and Claude now have opposite CRM biases. Your AEO strategy needs to be engine-specific, not one-size-fits-all.
  • Monitor continuously, not in snapshots. The CRM crack happened in a single wave. A monthly check would have missed the transition entirely.
  • Focus on query-level positions, not category averages. Beehiiv owns "newsletters" while Mailchimp owns "email marketing." Your consensus rate depends on which specific queries buyers are asking.
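The "monitor continuously" advice can be automated with a trivial wave-over-wave check. A minimal sketch, assuming you record each category's consensus count (queries with 4+/5 agreement, out of 4) after every wave; the history values below come from the category table in this post, and the one-query drop threshold is our own choice:

```python
def consensus_drops(history, min_drop=1):
    """history maps category -> per-wave consensus counts.
    Flag categories whose latest wave fell by at least min_drop
    queries versus the previous wave."""
    alerts = []
    for category, waves in history.items():
        if len(waves) >= 2 and waves[-2] - waves[-1] >= min_drop:
            alerts.append((category, waves[-2], waves[-1]))
    return alerts

history = {
    "Analytics": [1, 2, 3],
    "Dev Tools": [3, 3, 3],
    "Email Marketing": [2, 2, 2],
    "CRM": [3, 3, 1],
    "Project Management": [1, 0, 0],
}

# Flags CRM's 3 -> 1 crack in Wave 3 -- exactly the single-wave
# transition a monthly snapshot would have missed.
print(consensus_drops(history))  # [('CRM', 3, 1)]
```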

Methodology

We ran 20 queries across 5 AI search engines (ChatGPT, Perplexity, Gemini, Grok, and Claude) once per week for three consecutive weeks between March 6 and March 15, 2026. Each query was sent as a real-time API call, simulating how actual users interact with these platforms. We tracked 25 B2B SaaS brands across 5 categories (CRM, Project Management, Email Marketing, Analytics, and Dev Tools), recording which brand each engine listed first in its response.
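For readers who want to replicate the setup, the collection harness reduces to a loop like the one below. `ask_engine` is a stub standing in for each vendor's live chat API (credentials and endpoints not shown), and `extract_first_brand` is a deliberately simplified stand-in for the response parsing:

```python
import datetime

ENGINES = ["ChatGPT", "Perplexity", "Gemini", "Grok", "Claude"]

def ask_engine(engine, query):
    # Placeholder: the real harness makes a live API call to each
    # engine's chat endpoint. Stubbed so the sketch runs standalone.
    return "Amplitude is the most commonly recommended option..."

def extract_first_brand(response_text, tracked_brands):
    """Return the first tracked brand mentioned in the response,
    or None if no tracked brand appears."""
    hits = {b: response_text.find(b) for b in tracked_brands}
    hits = {b: p for b, p in hits.items() if p != -1}
    return min(hits, key=hits.get) if hits else None

def run_wave(queries, tracked_brands):
    """One weekly wave: every query against every engine, recording
    which tracked brand each engine listed first."""
    wave = {"date": datetime.date.today().isoformat(), "results": {}}
    for query in queries:
        wave["results"][query] = {
            engine: extract_first_brand(ask_engine(engine, query), tracked_brands)
            for engine in ENGINES
        }
    return wave
```

Real responses are messier than a single sentence, so production parsing needs to handle ranked lists, brand aliases, and responses that name no tracked brand at all.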

Frequently Asked Questions

What counts as "consensus" in this study?

Consensus means 4 or more of the 5 AI search engines agreed on the same brand as their #1 recommendation for a given query. Unanimous consensus is 5/5 agreement.

Why did analytics consensus improve so much?

Amplitude locked in 5/5 unanimous agreement on the "analytics comparison" query, and Google Analytics achieved 5/5 on two queries. The analytics category has clear leaders for different use cases (GA for general analytics, Amplitude for product analytics), and engines are converging on those leaders.

Is the project management window of opportunity permanent?

No. The window exists because no brand has captured consensus yet. Once a PM brand achieves sustained 3/5 or 4/5 agreement, it becomes harder for competitors to displace them, just as Amplitude's position in analytics is now difficult to challenge.

Can a "settled" category really destabilize?

Yes. CRM was stable at 3/4 consensus for two consecutive waves before dropping to 1/4 in Wave 3. Even categories with clear market leaders can fragment when engines begin rotating their preferences between competing brands.

How often should brands monitor their AI search positions?

Weekly, at minimum. The CRM consensus drop happened in a single wave. Monthly monitoring would have missed the transition from stable to fragmented entirely.
