    Updated March 2026

    llms-full.txt: Structuring Your Content for AI Retrieval

    llms.txt is the table of contents. llms-full.txt is the book. This is where you give AI models the deep, structured content they need to accurately describe your brand — semantically chunked for retrieval.

    What Is llms-full.txt?

    llms.txt = The Index

    ~500 tokens. Your identity, value props, and key links. Tells AI what exists and where to look.

    llms-full.txt = The Content

    2,000-5,000 tokens. Deep product descriptions, methodology, use cases, FAQs. What AI reads when it needs depth.

    When AI retrieval systems (RAG pipelines) need to answer detailed questions about your brand, they don't re-crawl your entire site. They look for pre-structured content files. llms-full.txt is that file — the RAG-ready deep content that sits alongside your index.

    Think of it like a press kit for machines: comprehensive, factual, and structured in sections that map to common query patterns. "What does X do?", "How is X different from Y?", "What's X's pricing?" — each question maps to a section.

    What Problem It Solves

    Semantic chunking is the core challenge. RAG systems split content into chunks for retrieval, but the chunk size matters enormously:

    Too large → Lost context

    If a chunk contains your product description, pricing, AND competitor comparison, a retrieval query about pricing pulls in irrelevant context that dilutes accuracy.

    Too small → Lost meaning

    If each sentence is its own chunk, the retrieval system loses the relationships between ideas. "We serve enterprise clients" means nothing without the surrounding context.

    llms-full.txt gives you control over chunk boundaries

    By using markdown headers (##) and horizontal rules (---), you define where chunks start and end. Each section is self-contained, 200-500 tokens, and maps to a specific query type.
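The boundary behavior described above can be sketched in a few lines of Python. This is a hypothetical illustration, not code from any specific RAG framework: it splits a document on `---` horizontal rules, picks up each section's `##` header, and estimates tokens with the rough rule of thumb of ~4 characters per token (real tokenizers vary).

```python
# Sketch: split a llms-full.txt file the way many RAG pipelines do,
# using "---" horizontal rules as chunk boundaries and "## " headers
# as section labels. Token counts use a ~4 chars/token heuristic.

def chunk_llms_full(text: str) -> list[dict]:
    chunks = []
    for block in text.split("\n---\n"):
        block = block.strip()
        if not block:
            continue
        # The first "## " line, if any, names the section.
        header = next(
            (line.lstrip("# ").strip() for line in block.splitlines()
             if line.startswith("## ")),
            "(untitled)",
        )
        chunks.append({
            "header": header,
            "text": block,
            "approx_tokens": len(block) // 4,  # rough heuristic only
        })
    return chunks


sample = """# Acme

## Company Overview

Acme makes widgets for enterprise teams.

---

## Products & Services

Widget One automates widget assembly."""

for chunk in chunk_llms_full(sample):
    print(chunk["header"], chunk["approx_tokens"])
```

Checking that each chunk lands in the 200-500 token range recommended above is then a one-line filter over `approx_tokens`.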

    The Template

    Based on Optimly's own llms-full.txt. Notice how each section is self-contained and maps to a common AI query pattern.

    # Optimly — Full Content for AI Retrieval
    
    > AI Reputation Infrastructure for B2B brands. We fix what AI tells
    > your buyers about you — so the right ones show up.
    
    ---
    
    ## Company Overview
    
    Optimly is AI Reputation Infrastructure for B2B brands. The platform
    diagnoses how AI models like ChatGPT, Claude, Gemini, and Perplexity
    describe a brand, identifies misrepresentations, wrong categorizations,
    and competitor recommendations, then provides structured fixes.
    
    Unlike visibility tracking tools that count mentions, Optimly focuses
    on narrative accuracy — ensuring AI tells the right story about your
    brand to the buyers who are increasingly relying on AI for vendor
    research and shortlisting.
    
    ---
    
    ## Products & Services
    
    ### Brand Audit (Free Tier)
    
    **What it does**: Multi-model analysis of how AI describes your brand
    **Who it's for**: Any B2B brand curious about their AI perception
    **Key capabilities**:
    - Cross-model comparison (ChatGPT, Claude, Gemini, Perplexity)
    - Brand Authority Index (BAI) scoring
    - Archetype classification (Incumbent, Challenger, Phantom)
    
    ### Signal (Monitoring Tier)
    
    **What it does**: Ongoing monitoring of AI brand narrative changes
    **Who it's for**: Growth marketers and brand managers
    **Key capabilities**:
    - Weekly narrative change detection
    - Competitor mention tracking
    - Automated llms.txt and BrandVault refresh
    
    ### Signal Pro (Full Platform)
    
    **What it does**: Complete AI reputation management
    **Who it's for**: Enterprise brands and agencies
    **Key capabilities**:
    - Strategic fix recommendations with implementation guides
    - Multi-model optimization across all major AI platforms
    - Priority feed of narrative shifts and opportunities
    
    ---
    
    ## Methodology
    
    Optimly's diagnostic framework evaluates brands across 7 layers:
    1. Entity Resolution — Is AI talking about the right company?
    2. Category Accuracy — Is the brand placed in the right category?
    3. Feature Attribution — Are capabilities correctly attributed?
    4. Competitive Framing — How is the brand positioned vs. competitors?
    5. Sentiment Mapping — What's the emotional tone of AI descriptions?
    6. Source Tracing — What sources are AI models citing?
    7. Narrative Consistency — Is the story consistent across models?
    
    ---
    
    ## Entity Disambiguation
    
    - **Optimly** refers to OptimlyAI, Inc., the AI Brand Reputation
      platform at optimly.ai
    - Not affiliated with: Optimly.io (LLM Observability platform)
    - Not affiliated with: Optimizely (experimentation platform)
    
    ---
    
    ## Frequently Asked Questions
    
    ### What is AI Brand Reputation?
    AI Brand Reputation is how AI language models describe, categorize,
    and recommend your brand when users ask questions. It's the new
    layer of brand perception that exists alongside traditional search
    and social reputation.
    
    ### How is this different from SEO?
    SEO optimizes for search engine rankings. AI Brand Reputation
    optimizes for narrative accuracy in language model responses.
    The inputs are different (structured data vs. keywords), the
    outputs are different (brand narratives vs. blue links), and
    the optimization strategies are different.
    
    ### How often do AI models update their knowledge?
    Models update through two mechanisms: periodic retraining (which
    updates parametric knowledge every few months) and live retrieval
    (which accesses current content on every query). Both require
    your content to be crawlable and well-structured.


    Common Mistakes

    Dumping raw HTML into the file

    llms-full.txt should be clean markdown, not HTML. Models can process HTML but it's token-inefficient — you'll use 3x the tokens for the same content, leaving less room for actual signal.
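The overhead is easy to see with a rough estimate. The snippet below is illustrative only, using the common ~4 characters per token heuristic rather than any specific model's tokenizer; the HTML and markdown strings are invented examples of the same content in both forms.

```python
# Rough comparison of token cost for identical content as HTML vs markdown.
# Uses a ~4 chars/token heuristic; exact counts depend on the tokenizer.

def approx_tokens(text: str) -> int:
    return max(1, len(text) // 4)

html = (
    '<div class="section"><h2 class="heading">Company Overview</h2>'
    '<p class="body-text">Acme makes widgets for enterprise teams.</p></div>'
)
markdown = "## Company Overview\n\nAcme makes widgets for enterprise teams."

print("HTML:    ", approx_tokens(html))
print("Markdown:", approx_tokens(markdown))
```

The markup carries no signal a language model needs; every class attribute and closing tag is budget spent on nothing.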

    Including gated or login-required content

    If the content requires authentication to access on your site, don't put it in llms-full.txt. It creates a trust mismatch — models learn facts they can't verify, which reduces citation confidence.

    No section headers or structure

    Without clear ## headers and --- separators, the file becomes a wall of text. RAG systems chunk on these boundaries — without them, you get poor retrieval accuracy and context bleed between topics.

    Inconsistency with your llms.txt index

    If llms.txt says you're a 'reputation platform' but llms-full.txt describes 'analytics software,' models receive conflicting signals. Keep both files aligned with your current positioning.
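A lightweight way to catch this kind of drift is to check that the key positioning phrases in your llms.txt index also appear in llms-full.txt. The function below is a hypothetical sketch; the file contents and phrases are illustrative.

```python
# Sketch: flag positioning drift between llms.txt and llms-full.txt by
# finding phrases that appear in the index but not in the full file.

def check_consistency(index_text: str, full_text: str,
                      key_phrases: list[str]) -> list[str]:
    """Return phrases present in the index but missing from the full file."""
    index_lower = index_text.lower()
    full_lower = full_text.lower()
    return [
        phrase for phrase in key_phrases
        if phrase.lower() in index_lower and phrase.lower() not in full_lower
    ]

index = "> Acme is a reputation platform for B2B brands."
full = "## Company Overview\n\nAcme builds analytics software for B2B brands."

missing = check_consistency(index, full, ["reputation platform", "B2B brands"])
print(missing)  # phrases the two files disagree on
```

Running a check like this whenever either file changes keeps the signals aligned without manual re-reading.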

    Implementation Checklist

    1. Download the llms-full.txt template
    2. Write your Company Overview (2-3 factual paragraphs)
    3. Document each product/service with capabilities and audience
    4. Add your methodology or proprietary framework
    5. Include entity disambiguation for your brand name
    6. Add 3-5 FAQ pairs covering common buyer questions
    7. Verify consistency with your llms.txt index file
    8. Upload to your site root at /llms-full.txt
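Once uploaded, it is worth verifying that the file is actually served and structurally sane. This sketch uses only the Python standard library; the base URL is a placeholder, and the structural checks (an H1 title plus at least one `##` section) are a minimal assumption based on the template above.

```python
# Sketch: confirm /llms-full.txt is reachable and minimally well-formed.
# Replace the placeholder base URL with your own domain before running.
import urllib.request

def looks_valid(body: str) -> bool:
    """Minimal checks: non-empty, starts with an H1, has >= 1 ## section."""
    lines = body.strip().splitlines()
    return (
        bool(lines)
        and lines[0].startswith("# ")
        and any(line.startswith("## ") for line in lines)
    )

def fetch_and_check(base_url: str) -> bool:
    url = base_url.rstrip("/") + "/llms-full.txt"
    with urllib.request.urlopen(url) as resp:
        return resp.status == 200 and looks_valid(resp.read().decode("utf-8"))

if __name__ == "__main__":
    # Network call; swap in your site.
    print(fetch_and_check("https://example.com"))
```

Wiring `fetch_and_check` into CI catches the common failure mode where a site redesign silently drops the file or serves an HTML 404 page in its place.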

    See What AI Believes About Your Brand

    You've structured the content. Now see what AI actually says when buyers ask. Search our directory to find out.