    Guide

    How to Check What AI Says About Your Brand (The Actual Methodology)

    Most guides tell you to open ChatGPT and type your brand name. That's step 1 of 10. Here's the full audit design — which questions to ask, why each matters, and how to interpret what you find.

    The Manual Method — Do It Yourself

    Walk through these 5 query categories on at least 3 models (ChatGPT, Claude, Gemini). Each one probes a different dimension of AI's understanding.

    1. Direct identity queries

    "What is [your brand]?"

    "Describe [your brand]'s main products and services."

    What you're testing: Does AI have basic parametric knowledge of you?

    Look for: Correct industry classification, accurate product descriptions, current information.

    Red flags: Wrong category entirely, outdated descriptions, confusion with similarly-named companies.

    2. Category placement queries

    "What are the best [your category] companies?"

    "Compare [your brand] to competitors in [your space]."

    What you're testing: Whether AI includes you in the competitive set where buyers are looking.

    Look for: Your brand appearing in the list. Position relative to competitors.

    Red flags: AI lists 10 competitors and you're not among them. You're invisible in the exact moment a buyer is deciding.

    3. Buyer intent queries

    "I need a [your solution type] for [common use case]. What should I consider?"

    What you're testing: This is the money query. It simulates an actual buyer asking AI for recommendations.

    Look for: Your brand appearing as a recommendation with accurate positioning.

    Red flags: If you don't appear, you're losing pipeline to whoever does.

    4. Sentiment probes

    "What are the strengths and weaknesses of [your brand]?"

    What you're testing: What narrative AI has internalized about you.

    Look for: Strengths that match your actual positioning. Weaknesses that are fair.

    Red flags: AI cites a strength you deprioritized two years ago, a sign that its parametric knowledge of your brand is stale.

    5. Competitor displacement queries

    "Is [your brand] better than [top competitor] for [your core use case]?"

    What you're testing: Whether AI positions you correctly relative to alternatives.

    Look for: Accurate comparison of differentiators.

    Red flags: AI recommending the competitor for your strongest use case, or describing your differentiators as shared features.

    Important: run every query on at least 3 models (ChatGPT, Claude, Gemini). They each have different crawling behavior and different "versions" of your brand.
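    The full audit is just a matrix: every query template crossed with every model. A minimal sketch in Python of generating that matrix (brand names, placeholders, and the `build_audit` helper are illustrative, not part of any tool mentioned here):

```python
from itertools import product

# The five query categories above, as prompt templates.
# Placeholder fields ({brand}, {category}, ...) are filled in per audit.
QUERY_TEMPLATES = {
    "identity": "What is {brand}?",
    "category": "What are the best {category} companies?",
    "buyer_intent": "I need a {solution} for {use_case}. What should I consider?",
    "sentiment": "What are the strengths and weaknesses of {brand}?",
    "displacement": "Is {brand} better than {competitor} for {use_case}?",
}

MODELS = ["ChatGPT", "Claude", "Gemini"]  # run every query on all three

def build_audit(brand, category, solution, use_case, competitor):
    """Return the full (model, category, prompt) audit matrix."""
    fields = dict(brand=brand, category=category, solution=solution,
                  use_case=use_case, competitor=competitor)
    return [
        {"model": m, "category": name, "prompt": tpl.format(**fields)}
        for m, (name, tpl) in product(MODELS, QUERY_TEMPLATES.items())
    ]

audit = build_audit("Acme Analytics", "product analytics",
                    "analytics platform", "funnel analysis", "BigCo")
print(len(audit))  # 3 models x 5 categories = 15 prompts
```

    Paste each prompt into the corresponding model's chat interface and record the answer; the point of generating them up front is that you ask every model the exact same wording.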

    How to Interpret What You Find

    Use the archetype framework as the diagnostic tool. After running the queries, the self-diagnosis takes 60 seconds:

    Phantom

    (111 of 5,829 brands)

    Signal: AI doesn't mention you at all across most queries

    Urgency: High — every AI-assisted buying decision happens without you.

    Misread

    (47 of 5,829 brands)

    Signal: AI mentions you but gets significant facts wrong

    Urgency: Highest — AI is actively steering buyers away with incorrect information.

    Challenger

    (463 of 5,829 brands)

    Signal: AI mentions you correctly but names competitors first

    Urgency: Medium — you're present but not preferred.

    Incumbent

    (379 of 5,829 brands)

    Signal: AI describes you accurately and recommends you

    Urgency: Monitor — maintain your position.

    Full archetype diagnostic guide →

    Why Models Disagree With Each Other

    Each AI model has two types of knowledge: parametric (baked in during training) and retrieved (fetched in real-time). The balance differs by model.

    ChatGPT relies more on retrieved data — OpenAI sent 10,816 crawler requests to our directory this week.

    Claude leans more on parametric knowledge — Anthropic sent 4,669 requests.

    Perplexity is retrieval-first — 1,699 requests, each answering a live user query.

    A brand might score well on ChatGPT (which just retrieved your updated page) but poorly on Claude (working from older training data). This is why single-model testing gives a false sense of security.

    What to Do With the Results

    The remediation framework, based on our data from 5,829 brands:

    1. Fix your structured data

      Add Organization schema with correct industry, product descriptions, and founding date. Guide →

    2. Align your authoritative sources

      Make sure your website, Crunchbase, LinkedIn, Wikipedia, and G2 all tell the same story. Guide →

    3. Create an llms.txt file

      The AI-specific equivalent of robots.txt — tells AI models what your brand is and does. Guide →

    4. Configure your robots.txt

      Ensure AI crawlers (GPTBot, ClaudeBot, PerplexityBot) are welcomed, not blocked. Guide →

    5. Create content that corrects misclassifications

      If AI thinks you're in the wrong category, publish a definitive page that corrects the record. Guide →
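    For step 1, Organization schema is JSON-LD embedded in a `<script type="application/ld+json">` tag on your homepage. A minimal sketch (the company name, URLs, dates, and NAICS code below are invented placeholders; swap in your real values):

```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Acme Analytics",
  "url": "https://www.acme-analytics.example",
  "description": "Product analytics platform for funnel analysis and retention reporting.",
  "foundingDate": "2019-03-01",
  "naics": "511210",
  "sameAs": [
    "https://www.linkedin.com/company/acme-analytics",
    "https://www.crunchbase.com/organization/acme-analytics"
  ]
}
```

    The `sameAs` links double as step 2: they tell crawlers which third-party profiles are officially yours, so keep those profiles consistent with this description.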
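    For steps 3 and 4, both files live at your site root. The robots.txt user-agent strings (GPTBot, ClaudeBot, PerplexityBot) are the crawlers named above; the llms.txt layout follows the proposed llms.txt convention (a markdown file with an H1 name, a blockquote summary, and key links), and the contents shown are illustrative:

```text
# robots.txt
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /
```

    And a minimal llms.txt:

```text
# Acme Analytics

> Product analytics platform for funnel analysis and retention reporting.

## Key pages

- [Product overview](https://www.acme-analytics.example/product): what the platform does
- [Pricing](https://www.acme-analytics.example/pricing): plans and tiers
```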

    When to Automate It

    If you have 1 brand, the manual method works. Run the 5 query categories across 3 models quarterly, track the changes in a spreadsheet, and you'll have a reasonable picture.
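    A plain CSV is enough for that quarterly tracking. A sketch of a one-function logger (the filename, columns, and `log_result` helper are illustrative choices, not a prescribed format):

```python
import csv
from datetime import date
from pathlib import Path

LOG = Path("ai_visibility_log.csv")  # illustrative filename
FIELDS = ["date", "model", "query_category",
          "brand_mentioned", "facts_accurate", "notes"]

def log_result(model, query_category, brand_mentioned,
               facts_accurate, notes=""):
    """Append one audit observation; write the header on first use."""
    new_file = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "date": date.today().isoformat(),
            "model": model,
            "query_category": query_category,
            "brand_mentioned": brand_mentioned,
            "facts_accurate": facts_accurate,
            "notes": notes,
        })

# One row per model x query category, each quarter.
log_result("ChatGPT", "buyer_intent", True, True,
           "listed third, after two competitors")
```

    Re-running the audit each quarter and diffing the `brand_mentioned` and `facts_accurate` columns is what turns a one-off check into a trend you can act on.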

    If you're managing multiple brands, need to track changes over time, or need to audit across 4+ models simultaneously — that's where tooling helps. Our free AI Visibility Checker runs the full methodology and gives you a BAI score in seconds.
