AI Search · AI Visibility · AI Citations
FogTrail Team · Updated

How to Check If AI Is Recommending Your Competitors Instead of You

You can check whether AI is recommending your competitors in about 10 minutes: open ChatGPT (chatgpt.com), Perplexity (perplexity.ai), Gemini (gemini.google.com), Grok (grok.com), and Claude (claude.ai), then ask each one "what is the best [your category]?" and "what are the top alternatives to [your product]?" If your competitors show up and you don't, you have an AI visibility problem. FogTrail's Wave 1 citation study found that AI engines disagree on the top recommendation in 50% of B2B queries and that startups appear on only 2.9 of 5 engines on average, so checking a single engine gives you an incomplete picture.

Most business owners discover this by accident. A prospect mentions they "asked ChatGPT" before hopping on a demo call, or a competitor's content keeps surfacing in AI answers while yours is nowhere to be found. The manual check described here will tell you exactly where you stand across all five major engines, and what the results actually mean.

The 5 Queries You Need to Ask

Five specific query patterns will reveal how AI engines position your brand relative to competitors. Run each one across all five engines to get a complete picture.

  1. Category query: "What is the best [your category] in 2026?" (e.g., "What is the best project management software in 2026?")
  2. Alternative query: "What are the best alternatives to [your product]?" This shows whether engines know you exist as a primary option.
  3. Alternative to competitor query: "What are the best alternatives to [top competitor]?" If you don't appear here, the engines don't associate you with your competitive set.
  4. Problem query: "How do I [problem your product solves]?" (e.g., "How do I track my team's tasks across projects?") This tests whether engines cite your content as a solution.
  5. Comparison query: "[Your product] vs [competitor]" or "[Competitor A] vs [Competitor B]." Direct comparisons reveal whether engines have enough information about you to form an opinion.

Write down every query before you start. Consistency matters: you want to compare answers to the same question across all five engines.
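The five query patterns above expand into a 25-item checklist (5 queries × 5 engines). A minimal sketch of that expansion is below; the category, product, competitor, and problem values are placeholders you would swap for your own, not recommendations:

```python
# Sketch: expand the five query templates into the full 5x5 checklist.
ENGINES = ["ChatGPT", "Perplexity", "Gemini", "Grok", "Claude"]

QUERY_TEMPLATES = [
    "What is the best {category} in 2026?",
    "What are the best alternatives to {product}?",
    "What are the best alternatives to {competitor}?",
    "How do I {problem}?",
    "{product} vs {competitor}",
]

def build_checklist(category, product, competitor, problem):
    """Return every (engine, query) pair to run by hand."""
    queries = [t.format(category=category, product=product,
                        competitor=competitor, problem=problem)
               for t in QUERY_TEMPLATES]
    return [(engine, q) for engine in ENGINES for q in queries]

# Placeholder names, for illustration only.
checklist = build_checklist(
    category="project management software",
    product="Acme PM",
    competitor="Trello",
    problem="track my team's tasks across projects",
)
print(len(checklist))  # 25 checks
```

Keeping the templates in one place means every engine gets the identical wording, which is what makes the cross-engine comparison valid.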

Where to Run Each Engine Check

Each of the five major AI search engines has a different access point and a slightly different interface. All five are free to use for this diagnostic.

ChatGPT is at chatgpt.com. OpenAI moved from chat.openai.com to chatgpt.com, and also owns chat.com, which redirects to ChatGPT. Use the default model with web search enabled (the search icon in the input bar). As of early 2026, ChatGPT has 810 million daily users and drives 87.4% of all AI referral traffic to websites, so if you don't appear here, you're missing the largest AI discovery channel.

Perplexity is at perplexity.ai. No account is required. Perplexity provides numbered citations for every claim, making it the easiest engine to audit. Scroll to the bottom of any answer to see the full source list. As of early 2026, Perplexity removed advertising from its search engine entirely, positioning itself as a pure AI research tool.

Gemini is at gemini.google.com. Requires a Google account. Gemini pulls heavily from Google's index, so strong SEO performance sometimes (but not always) translates into Gemini citations. Gemini's referral traffic to websites grew 115% between November 2025 and January 2026, and Google AI Overviews now reach 1.5 billion monthly users, making Google's AI surfaces increasingly important for discovery.

Grok is now available at grok.com as a standalone app, in addition to x.com/i/grok within X. You no longer need an X account to use Grok. The free Basic tier at grok.com provides limited access. Grok cites Reddit 13x more than Claude, Perplexity, and Gemini combined, so your Reddit presence (or lack thereof) disproportionately affects Grok results.

Claude is at claude.ai. Requires a free account. As of May 2025, Claude's web search feature is available to all users worldwide on all plans, including free. Toggle on web search in your profile settings to enable it. Claude uses Brave Search for retrieval and provides inline source citations. Claude tends to be the most conservative engine, citing fewer sources overall but with higher consistency in what it does cite.

Run all five queries across all five engines. That's 25 total checks. It takes about 10 minutes if you have tabs open for each engine.

How to Read the Results

Your brand will fall into one of four categories in each engine's response, and each category means something different for your visibility strategy.

Cited (best case): The engine mentions your brand by name AND links to your website as a source. This means the engine's retrieval system found your content, evaluated it as credible, and chose to surface it. You have real visibility in this engine.

Mentioned but not cited: The engine names your brand in the response text but doesn't link to your site. This typically happens when the engine learned about you from third-party sources (review sites, comparison articles, Reddit threads) rather than from your own content. You exist in the engine's knowledge, but your domain isn't the authority source.

Absent from the list: The engine answers the query and recommends other products, but yours isn't among them. This is the most common result for startups and smaller companies. The engine's retrieval system either didn't find your content or didn't consider it authoritative enough to include.

The engine doesn't know you at all: When you run the "[your product] vs [competitor]" query, the engine either says it doesn't have enough information about your product or provides inaccurate details. This means you have no semantic footprint in that engine's retrieval set.

Record your results in a simple grid: engines as columns, queries as rows, and mark each cell as Cited, Mentioned, Absent, or Unknown.
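That grid can be as simple as a spreadsheet, or a few lines of code if you prefer to keep results in version control. A minimal sketch (the filled-in statuses are illustrative, not real results):

```python
# Sketch: the results grid with queries as rows, engines as columns,
# and each cell set to one of the four outcomes described above.
import csv
import io

ENGINES = ["ChatGPT", "Perplexity", "Gemini", "Grok", "Claude"]
QUERIES = ["category", "alternatives-to-you", "alternatives-to-competitor",
           "problem", "comparison"]
STATUSES = {"Cited", "Mentioned", "Absent", "Unknown"}

# Start every cell at "Unknown", then fill in as you run the checks.
grid = {q: {e: "Unknown" for e in ENGINES} for q in QUERIES}
grid["category"]["Perplexity"] = "Mentioned"   # illustrative value
grid["comparison"]["ChatGPT"] = "Absent"       # illustrative value

def to_csv(grid):
    """Serialize the grid to CSV: one header row, one row per query."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["query"] + ENGINES)
    for query in QUERIES:
        row = grid[query]
        assert all(row[e] in STATUSES for e in ENGINES)
        writer.writerow([query] + [row[e] for e in ENGINES])
    return buf.getvalue()

print(to_csv(grid))
```

Saving a dated copy of this file each time you run the diagnostic gives you a crude history to compare against later checks.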

The Patterns You Will Probably Find

Three patterns show up repeatedly when companies run this diagnostic for the first time, and each one points to a different underlying problem.

Pattern 1: Your competitor appears on all five engines, you appear on zero or one. This is the most common discovery, especially for startups competing against established brands. FogTrail's research found that enterprise brands appear on an average of 5.0 out of 5 engines, while startups average only 2.9. The gap is structural. Established competitors have more content, more third-party mentions, and more citation history. A single blog post or press mention won't close that gap.

Pattern 2: You appear on some engines but not others. Each AI engine has different retrieval preferences and source biases. ChatGPT tends to favor brand websites. Grok pulls heavily from Reddit and X. Perplexity leans on recent, well-structured content. Appearing on two engines but missing from three means your content matches some retrieval patterns but not others. This is actually a better position than total invisibility because it means the engines can find you; you just need to expand your coverage across all five.

Pattern 3: You appear in "alternative to [competitor]" queries but not in category queries. This means AI engines associate you with your competitive set but don't consider you a primary recommendation. FogTrail's Wave 1 study found that "alternative to X" queries give the incumbent the #1 position in 93% of cases. Breaking out of the "alternative" framing and into the primary recommendation set requires building topical authority, not just comparison content.

What These Results Actually Mean for Your Business

AI search engines are becoming a primary discovery channel for B2B buyers. When a prospect asks ChatGPT "what is the best [your category]?" and your competitor appears at position 1 while you're absent, that prospect never learns you exist. There is no click-through to evaluate, no impression to count. You are simply not part of the consideration set.

The scale of this shift is accelerating. As of early 2026, 75% of people report using AI search tools more frequently than a year ago, with 43% using them daily (Yext, 2025). Pew Research found that 39% of U.S. adults now use ChatGPT weekly for decision-making tasks like comparing products or services. Gartner predicts that 50% of all online searches will involve an AI assistant by 2028. The trajectory is clear: more purchase research is moving to AI engines, and the brands that show up now are building citation history that compounds over time.

The manual check you just ran gives you a snapshot. What it cannot tell you is how your visibility changes over time, whether a content update actually moved the needle, or how your citation position shifts between monitoring cycles. AI engine results are not static. FogTrail's research found that brand citation counts swing up to 48% between identical runs, which means a single check captures one moment in a volatile system.
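To see why a single snapshot is unreliable, here is a toy simulation (synthetic numbers, not FogTrail data): if each check can swing up to 48% around a stable baseline, one run tells you little, while averaging several runs concentrates near the true value.

```python
# Toy illustration of run-to-run volatility. The base count and swing
# are assumptions chosen to mirror the "up to 48%" figure cited above.
import random
import statistics

random.seed(7)  # fixed seed so the sketch is reproducible

def simulated_citation_count(base=10, swing=0.48):
    """One noisy check: the count varies up to +/-48% around the base."""
    return base * (1 + random.uniform(-swing, swing))

single = simulated_citation_count()
averaged = statistics.mean(simulated_citation_count() for _ in range(10))

# Each individual run can land anywhere in [5.2, 14.8], but the mean
# of ten runs concentrates much closer to the true base of 10.
```

This is the statistical argument for repeated monitoring: trends only become distinguishable from noise once you have multiple samples per query.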

What to Do About It

If your diagnostic revealed gaps, there are concrete steps you can take immediately and others that require sustained effort.

Immediate actions (this week):

  • Create or update your product's comparison pages. Cover "[your product] vs [each major competitor]" with specific details: pricing, features, use cases. AI engines extract from these pages when answering comparison queries.
  • Audit your FAQ and documentation pages. Structure them with clear question-and-answer formatting that AI retrieval systems can extract as clean passages.
  • Check your presence on review aggregators (G2, Capterra, TrustRadius). Third-party mentions on these sites are one of the strongest signals AI engines use to verify that a product exists and does what it claims.

Short-term actions (this month):

  • Publish content that targets the specific queries where you're invisible. If you don't appear for "best [category] in 2026," write a detailed, current article about the category landscape that includes your product with honest positioning.
  • Build content depth on your core topics. A single article won't establish topical authority. AI engines favor domains that demonstrate consistent expertise through multiple pieces covering different angles of the same subject.
  • Get mentioned on third-party sites. Guest posts, podcast appearances, industry roundups, and community participation all create the independent corroboration that AI engines need before they'll cite you.

Ongoing (systematic monitoring):

The manual check you ran today works as a one-time diagnostic, but doing it weekly across 25 engine-query combinations is not sustainable. This is where AEO metrics and automated monitoring become practical. For a free starting point, HubSpot's AEO Grader provides a one-click AI readiness audit covering structured data and brand perception. Ahrefs Brand Radar offers a free tier for monitoring AI crawler traffic. These tools give you a baseline but lack continuous multi-engine tracking. An AEO platform like FogTrail ($499/mo) runs these checks across 100 queries every 48 hours, tracks citation position changes over time, identifies which engines dropped or added you, and generates the content needed to fill visibility gaps. It is the systematic version of the manual check you just did, running continuously.

Frequently Asked Questions

How often should I manually check AI search results for my brand?

A manual check once per month is reasonable for a baseline. AI engine results change frequently, with citation counts varying up to 48% between identical runs, so any single check is a snapshot. For ongoing tracking, automated monitoring every 48 hours provides the consistency needed to distinguish real trends from noise.

Do I need to check all five AI engines, or is one enough?

You need to check all five. As of April 2026, ChatGPT, Perplexity, Gemini, Grok, and Claude each have different retrieval systems, different source preferences, and different ranking behaviors. FogTrail's research found that AI engines disagree on the top recommendation in 50% of B2B queries. Checking only ChatGPT, for example, would miss engines where your competitors might dominate or where you might have unexpected visibility.

Can I improve my AI visibility without paying for a platform?

Yes, you can do basic AEO yourself. Structure content with clear headers and direct answers in the first paragraph, build FAQ pages targeting the questions buyers ask AI engines, get listed on review aggregators like G2 and Capterra, and publish comparison content. Free tools like HubSpot's AEO Grader and Ahrefs Brand Radar can give you a snapshot of where you stand. Where DIY breaks down is at scale: monitoring five engines continuously, running per-engine gap analysis, producing 50 to 100 articles to build topical authority, and verifying that published content actually moved citation positions.

Why does my competitor show up on ChatGPT but I only appear on Perplexity?

Each engine weighs different signals. ChatGPT favors brand websites and established domains. Perplexity prioritizes recent, well-structured content. Grok leans on Reddit and X discussions. Gemini pulls from Google's index. Claude is conservative and consistent. Your content likely matches Perplexity's retrieval preferences (recency, structure) but lacks the domain authority or third-party validation that ChatGPT requires.

What is the difference between being mentioned and being cited?

Being mentioned means the AI engine includes your brand name in its response text. Being cited means the engine links to your website as a source. Mentions indicate awareness, often from third-party sources like review sites or comparison articles. Citations indicate authority, meaning the engine's retrieval system found, evaluated, and selected your content as a credible source. Citations carry more weight because they drive direct traffic and signal to the engine that your domain is a primary source on the topic.
