Documentation sites are now read more by AI agents than by humans, and most of them are completely failing the bots. A new specification at agentdocsspec.com defines 23 checks across 7 categories to evaluate how well a documentation site serves AI consumers like Claude Code, Cursor, and GitHub Copilot.

The Core Problem

The spec identifies a fundamental mismatch: documentation sites were built for human readers navigating with browsers, but coding agents consume them through APIs that hit truncation limits, get buried under CSS bloat, can't follow cross-host redirects, and have no awareness of discovery mechanisms like llms.txt. When agents can't retrieve what they need, they fall back on stale training data or silently work with incomplete information: a recipe for bad code generation.
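To make the discovery gap concrete, here is a minimal sketch of the llms.txt lookup an agent might attempt before crawling HTML. The function name and fallback behavior are illustrative assumptions, not something the spec prescribes.

```typescript
// Sketch of the discovery step: check for a curated /llms.txt index
// before falling back to raw HTML. Runs on Node 18+ (global fetch).
async function discoverDocsIndex(baseUrl: string): Promise<string | null> {
  const res = await fetch(new URL("/llms.txt", baseUrl));
  if (res.ok) {
    // The index is a small markdown file linking to agent-friendly pages.
    return res.text();
  }
  // No index: the agent is left fetching HTML pages and hoping they fit
  // within its truncation limits.
  return null;
}
```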

23 Checks Across 7 Categories

The specification breaks evaluation into seven areas. Content Discoverability (7 checks) verifies that an llms.txt discovery index exists, fits in a single fetch, has links that resolve correctly, and gives agents a path to markdown versions of pages. Markdown Availability (2 checks) tests .md URL support and content negotiation via Accept headers. Page Size (4 checks) detects SPA/CSR rendering issues, measures markdown versus HTML sizes after conversion, and identifies where the actual content starts in the document. Content Structure (3 checks) examines how tabbed interfaces blow up when serialized, section header quality, and code fence validity. The remaining three categories account for the other seven checks.
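As an illustration, here is a hedged sketch of what one Markdown Availability check could look like in practice. The afdocs implementation may differ, and the URL handling here is a simplification.

```typescript
// Does a docs page serve markdown, either via a .md suffix on the URL
// or via Accept-header content negotiation? Illustrative sketch only.
async function supportsMarkdown(pageUrl: string): Promise<boolean> {
  // Try the .md variant of the URL first.
  const mdUrl = pageUrl.replace(/\/$/, "") + ".md";
  const mdRes = await fetch(mdUrl);
  const mdType = mdRes.headers.get("content-type") ?? "";
  if (mdRes.ok && mdType.includes("markdown")) {
    return true;
  }
  // Fall back to content negotiation on the original URL.
  const negotiated = await fetch(pageUrl, {
    headers: { Accept: "text/markdown" },
  });
  const negType = negotiated.headers.get("content-type") ?? "";
  return negotiated.ok && negType.includes("markdown");
}
```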

Quick Wins for Documentarians

The spec's authors identified the high-impact actions with the biggest return on investment:

- Create an llms.txt under 50K characters; it is the single most effective discovery mechanism (a sketch of the format follows this list).
- Serve markdown versions of pages via .md URLs or content negotiation.
- Keep individual pages under 50K characters by breaking up mega-pages.
- Add an llms.txt pointer to the top of every docs page.
- Avoid URL churn; use same-host redirects if you must move content.
- Monitor agent-facing resources: keep llms.txt fresh and verify markdown parity with the HTML.
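For the first quick win, a minimal llms.txt might look like the following, using the convention from llmstxt.org (an H1 title, a blockquote summary, then H2 sections of links). The project name, URLs, and descriptions are invented for illustration.

```markdown
# Example Project

> Developer documentation for Example Project, optimized for AI agents.

## Docs

- [Quickstart](https://docs.example.com/quickstart.md): Install and run in five minutes
- [API Reference](https://docs.example.com/api.md): Endpoints, parameters, and error codes

## Optional

- [Changelog](https://docs.example.com/changelog.md): Release history
```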

Testing Your Docs

A companion CLI tool called afdocs implements all 23 checks against live documentation sites, reporting what's working, what needs fixing, and actionable recommendations. The tool is available at afdocs.dev or directly from npm, with library usage and CI integration documented in the GitHub repository.
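The basic invocation from the announcement:

```
npx afdocs check https://docs.example.com
```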

Research-Backed Foundation

The spec emerged from two detailed research articles. "Agent-Friendly Docs" covers observations from validating 578 coding patterns with Claude, spanning URL failure modes, llms.txt discovery benefits, markdown advantages, and page truncation issues. "Agent Web Fetch Spelunking" is a deep dive into how Claude Code's web fetch pipeline processes HTML and markdown, covering summarization model behavior and truncation limits, and explaining why inline CSS can make a 97-line HTML page invisible to agents.
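The CSS point follows from byte-limited fetching: if a front-loaded inline style block consumes the truncation budget, the body text never survives the cut. The sketch below uses a made-up 10 KB limit; Claude Code's actual budget and pipeline differ.

```typescript
// Illustrative only: a fetcher that truncates at a fixed byte budget.
const LIMIT = 10_000; // invented number, not Claude Code's real limit

function truncatedView(html: string): string {
  return html.slice(0, LIMIT);
}

const page =
  "<html><head><style>" +
  "/* thousands of lines of inline CSS... */".padEnd(12_000, " ") +
  "</style></head><body><h1>The actual docs</h1></body></html>";

// The heading falls entirely outside the truncation window.
console.log(truncatedView(page).includes("The actual docs")); // false
```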

The Bottom Line

This spec is long overdue: developers have been suffering through broken agent integrations while docs teams had no framework for understanding what they were doing wrong. If you're maintaining documentation that AI coding assistants depend on, these 23 checks should become your baseline checklist.