
    What We Learned Tracking 8,008 Brand Score Changes in One Week

    AI brand perception isn't a snapshot. It's a moving target. This week, we tracked 8,008 BAI score changes across our 5,829-brand directory. Here's what the data reveals about how AI brand perception actually works.

    The Numbers

    • 8,008 total score changes tracked
    • 74 positive (0.9%)
    • 464 negative (5.8%)
    • 462 neutral (5.8%)

    The 6:1 negative-to-positive ratio is the most significant finding. For every brand that gained ground this week, six lost it. Among the changes that moved a score in a clear direction, 86% were negative.

    87.5% of tracked scores (7,008 of 8,008) were unchanged this week. Among those that did change, the skew toward negative is clear and consistent with our volatility analysis.
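
    To make the arithmetic concrete, here is a minimal Python sketch that reproduces the shares above from the raw counts. The counts are the ones reported in this post; nothing else is assumed.

        # Weekly delta counts reported above.
        counts = {"positive": 74, "negative": 464, "neutral": 462, "unchanged": 7008}
        total = sum(counts.values())  # 8,008 tracked scores

        for label, n in counts.items():
            print(f"{label}: {n} ({n / total:.1%})")

        # Negative-to-positive ratio among directional moves (~6:1).
        print(f"ratio: {counts['negative'] / counts['positive']:.1f} : 1")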

    Why the Default State Is Decay

    The 464 negative vs. 74 positive split isn't a one-week anomaly. It reflects a structural reality: AI brand perception degrades naturally over time. Think of it as information entropy.

    Your brand's AI representation was built from a specific set of training data at a specific point in time. Every day that passes:

    • Your competitors publish new content that AI indexes
    • AI models incorporate updated competitive landscapes
    • Your own content ages — what was current 6 months ago may not reflect your latest product
    • Third-party sources about you may become outdated or get edited

    Without active maintenance, the accuracy of AI's understanding of your brand decays. Not because anyone is attacking you, but because the information ecosystem moves on and your brand representation doesn't update itself.

    What Drives Volatility Spikes

    • Week 12: 724 total deltas
    • Week 13: 276 total deltas

    The 2.6x week-over-week variation suggests that volatility isn't constant — it spikes and subsides. We hypothesize this maps to model update cycles. When a major model incorporates new training data, hundreds of brand representations shift simultaneously.

    The Week 12 spike corresponds to a period when we observed increased crawl activity from GPTBot — a sign that OpenAI was incorporating new training data. The following week's lower volatility suggests the update settled.
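
    One way to operationalize this is a simple spike flag: compare each week's delta count to a trailing average and flag weeks that exceed it by some multiple. The sketch below does exactly that; the 2x factor and the history values other than 724 and 276 are illustrative assumptions, not calibrated numbers.

        from statistics import mean

        def flag_spikes(weekly_deltas, window=4, factor=2.0):
            # Flag weeks whose delta count exceeds `factor` times the
            # trailing `window`-week average. Thresholds are illustrative.
            spikes = []
            for i in range(window, len(weekly_deltas)):
                baseline = mean(weekly_deltas[i - window:i])
                if baseline and weekly_deltas[i] > factor * baseline:
                    spikes.append(i)
            return spikes

        # Hypothetical series ending with the two weeks reported above.
        history = [310, 295, 330, 280, 724, 276]
        print(flag_spikes(history))  # -> [4], the Week 12 spike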

    Correlations Worth Watching

    Caveat: these correlations come from a single dataset, and we're continuing to study them.

    • Higher BAI scores correlate with lower volatility. Brands scoring 80+ tend to have smaller week-over-week changes. Strong, consistent signals across many authoritative sources resist noise.
    • Category matters. SaaS/Cloud Software brands experience higher volatility than Retail/E-commerce — likely because the SaaS competitive landscape changes faster.
    • Brands with comprehensive structured data are more resilient. When brands have Organization schema, SoftwareApplication schema, and an llms.txt file, their BAI scores are more stable (a minimal example follows this list).
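
    As a sketch of what "comprehensive structured data" can look like, here is a minimal schema.org Organization payload built in Python. Every field value is a placeholder; the properties your brand needs will vary, and a SoftwareApplication payload follows the same pattern with a different @type.

        import json

        # Minimal schema.org Organization markup; every value is a placeholder.
        organization = {
            "@context": "https://schema.org",
            "@type": "Organization",
            "name": "Example Brand",
            "url": "https://example.com",
            "description": "A one-sentence description kept current with your latest product.",
            "sameAs": [  # authoritative third-party profiles
                "https://www.linkedin.com/company/example-brand",
                "https://github.com/example-brand",
            ],
        }

        # Embed the output in a <script type="application/ld+json"> tag on your pages.
        print(json.dumps(organization, indent=2))

    An llms.txt file is simpler still: by convention, a plain-text summary of your site and products served at the site root as /llms.txt.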

    What to Monitor and How Often

    Monthly

    Run a full audit across all models and query categories. Your comprehensive baseline check.
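
    As an illustration of the shape of such an audit (not our actual tooling), the sketch below loops over a model list and query categories. get_bai_score is a hypothetical stand-in for however you pull scores, and both lists are examples rather than a canonical set.

        from itertools import product

        MODELS = ["gpt-4o", "claude", "gemini"]              # example model list
        CATEGORIES = ["overview", "pricing", "competitors"]  # example query categories

        def get_bai_score(brand, model, category):
            # Hypothetical stand-in for your actual score source.
            raise NotImplementedError

        def monthly_audit(brand):
            # One score per (model, category) pair: the comprehensive baseline.
            return {
                (model, category): get_bai_score(brand, model, category)
                for model, category in product(MODELS, CATEGORIES)
            }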

    Weekly

    Track your BAI score for significant changes. A 10+ point swing warrants investigation — something changed in your sources or the model's training data.
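
    A weekly check can be as simple as the sketch below, assuming you keep last week's score around. The 10-point threshold comes straight from the guidance above; the scores in the example call are hypothetical.

        def check_weekly_swing(previous, current, threshold=10.0):
            # Flag week-over-week BAI swings that warrant investigation.
            delta = current - previous
            if abs(delta) >= threshold:
                print(f"Investigate: score moved {delta:+.1f} points this week.")
            else:
                print(f"Within normal range ({delta:+.1f} points).")

        check_weekly_swing(previous=78.0, current=66.5)  # hypothetical scores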

    After Any Change

    If you update your website, structured data, or authoritative third-party profiles, check your BAI score within 1–2 weeks to see if the change propagated.
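
    The same comparison works as a propagation check: record a baseline score and date when you ship the change, then re-check after one to two weeks. The dates and scores below are hypothetical.

        from datetime import date

        # Hypothetical baseline recorded the day the structured data was updated.
        baseline = {"date": date(2025, 3, 10), "score": 66.5}

        def propagation_check(baseline, current_score, check_date):
            # Report movement since the change shipped.
            elapsed = (check_date - baseline["date"]).days
            delta = current_score - baseline["score"]
            print(f"{elapsed} days since change: {delta:+.1f} points")

        propagation_check(baseline, current_score=71.0, check_date=date(2025, 3, 24))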