Walk through these 5 query categories on at least 3 models (ChatGPT, Claude, Gemini). Each one probes a different dimension of AI's understanding.
"What is [your brand]?"
"Describe [your brand]'s main products and services."
What you're testing: Does AI have basic parametric knowledge of you?
Look for: Correct industry classification, accurate product descriptions, current information.
Red flags: Wrong category entirely, outdated descriptions, confusion with similarly-named companies.
"What are the best [your category] companies?"
"Compare [your brand] to competitors in [your space]."
What you're testing: Whether AI includes you in the competitive set where buyers are looking.
Look for: Your brand appearing in the list. Position relative to competitors.
Red flags: AI lists 10 competitors and you're not among them. You're invisible in the exact moment a buyer is deciding.
"I need a [your solution type] for [common use case]. What should I consider?"
What you're testing: This is the money query. It simulates an actual buyer asking AI for recommendations.
Look for: Your brand appearing as a recommendation with accurate positioning.
Red flags: If you don't appear, you're losing pipeline to whoever does.
"What are the strengths and weaknesses of [your brand]?"
What you're testing: What narrative AI has internalized about you.
Look for: Strengths that match your actual positioning. Weaknesses that are fair.
Red flags: AI cites a strength you deprioritized two years ago — a sign its parametric knowledge of you is stale.
"Is [your brand] better than [top competitor] for [your core use case]?"
What you're testing: Whether AI positions you correctly relative to alternatives.
Look for: Accurate comparison of differentiators.
Red flags: AI recommending the competitor for your strongest use case, or describing your differentiators as shared features.
Important: run every query on at least 3 models (ChatGPT, Claude, Gemini). They each have different crawling behavior and different "versions" of your brand.
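The five categories above can be templated so the same audit runs identically every quarter and across every model. A minimal sketch (the brand details are placeholders — substitute your own; the step of actually sending each query to ChatGPT, Claude, or Gemini is left to whichever API client or chat UI you use):

```python
# Brand details are placeholders for illustration -- substitute your own.
BRAND = {
    "name": "Acme Analytics",
    "category": "product analytics",
    "solution_type": "product analytics platform",
    "use_case": "tracking user funnels",
    "top_competitor": "BigCo Insights",
}

def build_audit_queries(b):
    """Return the 5 audit query categories as {category: [queries]}."""
    return {
        "knowledge": [
            f"What is {b['name']}?",
            f"Describe {b['name']}'s main products and services.",
        ],
        "competitive_set": [
            f"What are the best {b['category']} companies?",
            f"Compare {b['name']} to competitors in {b['category']}.",
        ],
        "recommendation": [
            f"I need a {b['solution_type']} for {b['use_case']}. "
            "What should I consider?",
        ],
        "narrative": [
            f"What are the strengths and weaknesses of {b['name']}?",
        ],
        "positioning": [
            f"Is {b['name']} better than {b['top_competitor']} "
            f"for {b['use_case']}?",
        ],
    }

queries = build_audit_queries(BRAND)
for category, qs in queries.items():
    for q in qs:
        print(f"[{category}] {q}")
```

Run the same generated list against each model so differences in the answers reflect the models, not variations in your phrasing.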
Use the archetype framework as the diagnostic tool. After running the queries, the self-diagnosis takes 60 seconds:
Signal: AI doesn't mention you at all across most queries
Urgency: High — every AI-assisted buying decision happens without you.
Signal: AI mentions you but gets significant facts wrong
Urgency: Highest — AI is actively steering buyers away with incorrect information.
Signal: AI mentions you correctly but names competitors first
Urgency: Medium — you're present but not preferred.
Signal: AI describes you accurately and recommends you
Urgency: Monitor — maintain your position.
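The four signal/urgency pairs above reduce to a simple decision function. A sketch (the three yes/no inputs are illustrative names, not a formal framework):

```python
def diagnose(mentioned, facts_correct, ranked_first):
    """Map audit observations to an urgency level, per the signals above."""
    if not mentioned:
        return "High: invisible -- AI-assisted decisions happen without you"
    if not facts_correct:
        return "Highest: misrepresented -- AI steers buyers away with bad facts"
    if not ranked_first:
        return "Medium: present but not preferred"
    return "Monitor: maintain your position"

# Example: mentioned, but with significant factual errors.
print(diagnose(mentioned=True, facts_correct=False, ranked_first=False))
```

Note the ordering: factual errors outrank mere absence, because wrong information actively misleads buyers rather than simply omitting you.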
Each AI model has two types of knowledge: parametric (baked in during training) and retrieved (fetched in real-time). The balance differs by model.
ChatGPT relies more on retrieved data — OpenAI sent 10,816 crawler requests to our directory this week.
Claude leans more on parametric knowledge — Anthropic sent 4,669 requests.
Perplexity is retrieval-first — 1,699 requests, each answering a live user query.
A brand might score well on ChatGPT (which just retrieved your updated page) but poorly on Claude (working from older training data). This is why single-model testing gives a false sense of security.
The remediation framework, based on our data from 5,829 brands:
Fix your structured data
Add Organization schema with correct industry, product descriptions, and founding date.
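A minimal JSON-LD Organization snippet illustrates the idea (every value below is a placeholder — replace with your own details; `name`, `foundingDate`, `description`, and `sameAs` are standard schema.org properties):

```html
<!-- JSON-LD Organization schema; all values are placeholders -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Acme Analytics",
  "url": "https://www.example.com",
  "foundingDate": "2019",
  "description": "Product analytics platform for B2B SaaS teams.",
  "sameAs": [
    "https://www.linkedin.com/company/example",
    "https://www.crunchbase.com/organization/example"
  ]
}
</script>
```

The `sameAs` links matter here: they tie your site to the third-party profiles AI models also crawl.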
Align your authoritative sources
Make sure your website, Crunchbase, LinkedIn, Wikipedia, and G2 all tell the same story.
Create an llms.txt file
The AI-specific equivalent of robots.txt — tells AI models what your brand is and does.
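A minimal sketch following the emerging llms.txt convention — a markdown file served at your site root, with a title, a one-line summary, and links to your key pages (all names and URLs below are placeholders):

```markdown
# Acme Analytics

> Acme Analytics is a product analytics platform for B2B SaaS teams.

## Products

- [Funnel Tracking](https://www.example.com/funnels): Self-serve funnel analysis

## Company

- [About](https://www.example.com/about): Founding story, team, and positioning
```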
Configure your robots.txt
Ensure AI crawlers (GPTBot, ClaudeBot, PerplexityBot) are welcomed, not blocked.
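Explicitly allowing the major AI crawlers takes a few lines in robots.txt (GPTBot, ClaudeBot, and PerplexityBot are the user-agent strings OpenAI, Anthropic, and Perplexity publish for their crawlers):

```
# Explicitly allow the major AI crawlers
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /
```

Also check that no blanket `Disallow: /` rule higher in the file, or a CDN-level bot filter, overrides these entries.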
Create content that corrects misclassifications
If AI thinks you're in the wrong category, publish a definitive page that corrects the record.
If you have one brand, the manual method works. Run the 5 query categories across 3 models quarterly, track the changes in a spreadsheet, and you'll have a reasonable picture.
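For the spreadsheet, even a small script keeps quarterly runs consistent. A sketch assuming a flat CSV layout (the column names are an assumption, not a standard):

```python
import csv
import os
from datetime import date

# Assumed column layout for the tracking sheet.
FIELDS = ["date", "model", "category", "mentioned", "facts_correct", "rank"]

def log_result(path, model, category, mentioned, facts_correct, rank):
    """Append one audit observation; write the header if the file is new."""
    new_file = not os.path.exists(path) or os.path.getsize(path) == 0
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "date": date.today().isoformat(),
            "model": model,
            "category": category,
            "mentioned": mentioned,
            "facts_correct": facts_correct,
            "rank": rank,
        })

# Example: ChatGPT named the brand second on the money query.
log_result("audit.csv", "ChatGPT", "recommendation", True, True, 2)
```

One row per model per query category per quarter is enough to see whether your position is drifting between audits.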
If you're managing multiple brands, need to track changes over time, or need to audit across 4+ models simultaneously — that's where tooling helps. Our free AI Visibility Checker runs the full methodology and gives you a BAI score in seconds.