There is a person right now, somewhere, integrating your API. They're not on your documentation site. They haven't opened your getting-started guide. They have never seen your interactive playground or your carefully designed sidebar navigation.
They're sitting in Claude. Or Copilot. Or Cursor. They typed something like "integrate the Stripe billing API with my Next.js app using the app router" and waited for working code to come back. The AI read your docs on their behalf. It found the relevant endpoints, understood the authentication flow, picked the right SDK methods, and produced an implementation.
Two weeks ago at Start Summit Hackathon in St. Gallen, I watched this happen in real time. I was talking with a group of CS students and a couple of early-stage startup founders about how they approach new APIs, and every single one of them described the same workflow: paste the problem into an AI, get code back, iterate from there. One of the students laughed when I asked if she'd read the docs. "Why would I? Claude reads them for me."
The person never visited your site. They may never visit your site. And this is increasingly just how software gets built.
The core shift
Documentation now has two fundamentally different consumers: humans who read it and AI assistants that read it on behalf of builders. Most documentation is still optimised exclusively for humans, even though the AI is already the dominant reader.
This changes everything downstream:
- Freshness is now a reliability issue. When an AI serves stale content, the builder has no way to detect the problem. The damage scales silently.
- "Developer" is too narrow a word. Product managers, designers, and analysts are shipping software through AI assistants, often without ever reading a line of documentation themselves.
- Machine-readable structure matters more than visual design. Clean markdown, self-contained blocks, and explicit metadata are what allow AI to represent your product accurately.
- Format requirements have split. Human readers need narrative. AI intermediaries need structured, parseable specs. You need to serve both.
The rest of this post unpacks how we got here, what this means for DevRel, and what you can do about it right now.
The journey nobody planned for
For a long time, developer relations followed a well-understood path. You wrote comprehensive documentation. You published quickstart guides. You gave conference talks. You maintained a presence on Stack Overflow. You made your API reference searchable, your SDKs idiomatic, your error messages helpful.
That path assumed the developer would read your content. Navigate your structure. Follow your steps.
GitHub's 2024 developer survey found that 97% of enterprise developers have used AI coding tools at some point. Stack Overflow's annual survey showed 76% of all developers are using or planning to use AI tools, with 62% of professionals actively using them day to day. By 2026, that number climbed to 84%, with 41% of all code now AI-generated and 51% of professional developers using AI tools daily. Those numbers aren't slowing down.
The new journey looks different. Someone describes what they want in natural language. An AI assistant reads the documentation, finds the relevant sections, and generates the integration. The builder reviews the output, maybe refines the prompt, maybe asks a follow-up. Minutes, not hours.
The getting-started funnel that DevRel teams spent years perfecting? It's being bypassed. Not because it was bad. The entry point just moved.
Two consumers, one set of docs
Documentation now has two fundamentally different audiences.
The first is the human reader. This person still exists. They show up for architecture decisions, edge case debugging, compliance review, and conceptual understanding. They want narrative explanations, well-organised reference material, and clear reasoning about trade-offs.
The second is the AI intermediary. It reads your documentation on behalf of a builder. It does not care about your sidebar. It does not appreciate your visual design. It needs structured, machine-parseable content: clean markdown, consistent formatting, explicit specifications it can reason about without ambiguity.
Almost every documentation site today is optimised exclusively for the first audience. The second audience is already the dominant consumer.
Jeremy Howard identified this tension when he proposed the /llms.txt standard in 2024. His observation was precise: "Large language models increasingly rely on website information, but face a critical limitation: context windows are too small to handle most websites in their entirety." The proposal is simple. A curated markdown file at /llms.txt that gives AI models a structured overview of your product and links to the most important resources. FastHTML, Anthropic's own docs, and a growing directory of projects now ship one.
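To make the convention concrete, here is a minimal /llms.txt following the llmstxt.org shape: an H1 with the project name, a blockquote summary, then sections of annotated links. The product name and URLs below are placeholders, not a real spec.

```markdown
# Acme Billing API

> Acme provides a REST API for subscription billing. Authenticate with a
> bearer token; all endpoints are versioned under /v2.

## Docs

- [API reference (OpenAPI)](https://docs.acme.example/openapi.json): full machine-readable spec
- [Authentication](https://docs.acme.example/auth.md): token issuance and scopes
- [Webhooks](https://docs.acme.example/webhooks.md): event types and retry behaviour

## Optional

- [Changelog](https://docs.acme.example/changelog.md)
```

The file lives at the site root, so an AI assistant can fetch one small, curated document instead of crawling the whole site.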
It is a useful convention. But it is also a symptom of a deeper problem. The real issue is not format. It is that most documentation was never designed with machine consumption in mind.
The builder is not cutting corners
There's a temptation to look at the person who prompts Claude instead of reading docs and conclude they're taking shortcuts. That they don't really understand what's happening in the code. That they're somehow a lesser kind of developer.
I've had this conversation enough times now to know that's usually wrong.
Many of these builders are senior engineers making deliberate efficiency choices. They understand the code, they just don't want to navigate four pages of documentation to find the three lines they actually need. They've learned that an AI assistant can extract those lines faster than they can scan for them, so they delegate the reading. (Honestly, I do this myself. I can't remember the last time I read a getting-started guide top to bottom.)
Anthropic recognised this pattern when they built the Model Context Protocol. MCP is now supported by Claude, ChatGPT, VS Code, Cursor, and others. It's explicitly designed so AI assistants can reach into external systems, pull context, and act on it. The specification describes it as providing "access to an ecosystem of data sources, tools and apps which will enhance capabilities and improve the end-user experience."
Read that carefully. It's infrastructure language, not convenience language. The builders using these tools aren't avoiding work. They're working through a new layer, and your documentation is part of that layer whether you designed it to be or not.
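That new layer is often just a few lines of configuration. As a sketch, this is the shape of an MCP server entry in Claude Desktop's claude_desktop_config.json; the server package name here is hypothetical, standing in for any documentation server a vendor might publish.

```json
{
  "mcpServers": {
    "acme-docs": {
      "command": "npx",
      "args": ["-y", "@acme/docs-mcp-server"]
    }
  }
}
```

Once registered, the assistant can query that server for context directly, without the builder ever opening a browser tab.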
The numbers back this up. Claude alone now handles 25 billion API calls per month, with 30 million monthly active users across 159 countries. 70% of Fortune 100 companies use Claude. According to a Menlo Ventures survey, Anthropic holds 32% of enterprise AI market share by model usage, ahead of OpenAI at 25%. An HSBC research report puts that even higher: 40% by total AI spending. These aren't experimental tools. They're primary infrastructure.
Developer relations was built for a different era
If your DevRel strategy was designed before 2023, it was designed for a world where developers read docs directly. That world hasn't disappeared, but it's no longer the dominant interaction pattern for a growing share of builders.
This changes the calculus on several long-standing DevRel activities.
Conference talks. A 45-minute presentation at a developer conference reaches a room of a few hundred people. A well-structured /llms.txt file and clean machine-readable documentation reach every builder who asks any AI assistant about your product, continuously, at any time. The talk is a one-time event. The machine-readable docs compound. I'm not saying conferences are worthless (I literally just came back from one), but the leverage equation has shifted.
Getting-started guides. The classic five-step quickstart tutorial is increasingly a formality. The builder doesn't follow steps. They describe what they want and expect the AI to produce the integration. If the API is well-documented in a machine-friendly format, the AI handles the getting-started experience more efficiently than any tutorial could. What tutorials should become instead is conceptual material: explaining why you'd choose approach A over approach B. The AI can generate the implementation. It's much less reliable at explaining the trade-offs.
Stack Overflow. Their own survey data showed that 84% of developers use technical documentation directly, with 90% of those relying on docs within API and SDK packages. But the way they access those docs is increasingly through an AI layer, not a browser tab. The questions that still reach Stack Overflow tend to be the hard ones. Edge cases, production debugging, things that require nuance. Valuable, sure. But no longer where the volume is.
When the AI reads your docs, freshness becomes critical
Here is the part that most teams have not thought through.
When a human reads a documentation page, they can apply judgement. They might notice the screenshots look old, or that a comment at the bottom says the process changed. They can squint at it and think "this feels outdated."
An AI assistant can't do any of that. It reads the text, processes it as fact, and generates an answer with full confidence. If the documentation describes a deprecated endpoint, the AI will cheerfully recommend integrating with it. If the documentation references infrastructure that was replaced six months ago, the AI will describe the old setup as current. No hesitation.
And here's the thing that makes this worse than it sounds: 66% of developers already say the biggest problem with AI tools is that they give results that are "almost right but not quite." Stale documentation feeds directly into that problem. The AI isn't hallucinating. It's faithfully reproducing outdated content, and there's no way for the builder to tell the difference.
The builder trusts the AI. The AI trusts the documentation. If the documentation is stale, that trust chain delivers a confidently wrong answer.
This was always a problem, obviously. Stale content has always confused people. But the damage was contained because human readers could sometimes catch it. AI intermediaries can't. They amplify stale content by serving it at scale, with authority, to people who have no reason to doubt it.
Freshness isn't a content quality issue anymore. It's a reliability issue for every AI-powered workflow that touches your docs.
The word "developer" is too narrow
The people building software in 2026 don't all identify as developers. Some are designers who prompt Claude to build a working prototype. Some are product managers who use Cursor to ship internal tools. Some are data analysts who describe a data pipeline in natural language and let an agent assemble it. At Start Summit, half the hackathon teams had members with zero programming background who were shipping working software by the end of the weekend.
Ramp is a useful example. The fintech company went from a $5.8B valuation in 2023 to $32B by late 2025, crossing $1B in annualised revenue along the way. One of the fastest-growing startups in history. A widely discussed part of their approach: product managers building features directly with AI tools instead of waiting in an engineering backlog. PMs at Ramp do not just write specs. They ship code. The AI handles the implementation. The PM handles the intent.
Not a shortcut. A new operating model, and it's working at a scale that makes it really hard to dismiss as an experiment.
Anthropic's own internal study is revealing here. When they surveyed 132 of their own engineers about how they use Claude, the engineers reported using it for about 60% of their work tasks. The most common uses? Debugging existing code, understanding what parts of the codebase were doing, and implementing new features. The engineers said they tend to hand Claude tasks that are "not complex, repetitive, where code quality isn't critical." And 27% of the work they now do with Claude simply wouldn't have been done at all before.
That's Anthropic's own team. The people who built the model are using it as a documentation reader, a codebase navigator, and a first-draft generator. Everyone else is doing the same, just with your docs instead of theirs.
Anthropic has been deliberate about calling this the "builder" persona. Their tools are designed not just for professional software engineers but for anyone who can describe what they want to build. When Claude can scaffold a full-stack application from a Figma design via MCP, the traditional line between "developer" and "non-developer" dissolves.
This has real implications for anyone who maintains documentation or cares about developer experience. Your audience is no longer limited to people who know what a REST endpoint is. It includes anyone whose AI assistant might interact with your product. The PM at Ramp who ships a feature using your API? Probably never reading your documentation directly. Their AI agent absolutely will.
What this means for documentation
If documentation now serves two audiences, human readers and AI intermediaries, it needs to work for both. Sounds obvious. In practice, almost nobody does it.
Here's what I think actually matters:
Machine-readable formats alongside human-readable ones. If your API docs are a beautifully rendered HTML page that an LLM has to scrape and parse, the AI is working harder than it should. Ship the raw OpenAPI spec alongside the rendered version. Ship clean markdown. Make the specifications accessible without requiring the AI to interpret page layout.
Block-level structure instead of page-level narrative. AI assistants do not consume documentation page by page. They extract relevant sections. A document with clear headings, self-contained paragraphs, and explicit block-level semantics is dramatically more useful to an AI than a flowing narrative that requires reading the entire page for context.
Trust signals that machines can read. When was this document last reviewed? Is this still current? Has the content been flagged? These signals need to exist in a form the AI can access, not just as visual cues on a web page. A freshness score, an expiry status, a review date: this is the metadata that lets an AI decide whether a document is safe to use as a source.
Freshness as a prerequisite, not a feature. When an AI assistant serves a builder a confident answer based on a deprecated endpoint, the damage is worse than a 404. The builder builds on it. Ships it. Then it breaks in production, and nobody knows why until someone traces it back to documentation that should have been updated months ago. Every document that an AI might reference needs a mechanism to prove it's still current. (This is, full disclosure, exactly the problem we're building Rasepi to solve. Forced expiry on documentation blocks so stale content can't hide.)
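One lightweight way to expose these signals is YAML front matter on every page, readable by both your static site generator and an AI crawler. The field names below are illustrative, not a standard.

```yaml
---
title: Creating a subscription
last_reviewed: 2026-01-15
review_owner: billing-docs@acme.example
expires: 2026-07-15   # treat content as stale after this date
status: current       # current | needs-review | deprecated
---
```

Even this much gives an AI intermediary something to reason about: a page past its expiry date can be deprioritised or flagged instead of served as fact.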
Getting started: audit your current docs
If you've read this far and you're thinking "okay, but what do I actually do on Monday," here are four concrete things you can check this week.
1. Test your docs through an AI. Open Claude or ChatGPT and ask it to integrate your product in a realistic scenario. Don't use your internal knowledge. Just look at what the AI produces. Is it correct? Is it current? Is it using the right endpoints, the right SDK version, the right auth flow? If the AI gets it wrong, that's what builders are getting right now.
2. Check for stale content. Pick your five most-visited documentation pages and ask: when was this last reviewed? Does it still describe the current state of the product? If you can't answer that confidently, neither can an AI. This is the single highest-leverage fix for most teams.
3. Ship machine-readable formats. If you don't have a /llms.txt file, create one. If your API reference is only available as rendered HTML, export the raw OpenAPI spec and make it accessible. If your docs are in a CMS that doesn't output clean markdown, that's a problem worth solving now.
4. Add review dates and freshness metadata. Even something simple helps: a last-reviewed field in your content management system, or a mandatory review cycle for high-traffic pages. This gives both humans and AI a signal about whether content is trustworthy. Tools like Rasepi can automate this with forced expiry at the block level, but even a manual process is better than nothing.
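The stale-content check in step 2 is easy to automate once review dates exist in the source. Here is a minimal sketch that classifies a page from a `last_reviewed: YYYY-MM-DD` front-matter field; the field name and the 180-day threshold are assumptions, not a convention your tooling will already know.

```python
# Sketch: flag documentation pages whose front matter says they're stale.
# Assumes each page carries a `last_reviewed: YYYY-MM-DD` field (illustrative).
import re
from datetime import date, timedelta

REVIEW_FIELD = re.compile(r"^last_reviewed:\s*(\d{4}-\d{2}-\d{2})", re.MULTILINE)

def audit_page(markdown_text: str, today: date, max_age_days: int = 180) -> str:
    """Return 'current', 'stale', or 'unreviewed' for one page's source."""
    match = REVIEW_FIELD.search(markdown_text)
    if not match:
        return "unreviewed"  # no review date at all: the worst case for an AI reader
    reviewed = date.fromisoformat(match.group(1))
    if today - reviewed > timedelta(days=max_age_days):
        return "stale"
    return "current"

page = "---\ntitle: Auth guide\nlast_reviewed: 2025-01-10\n---\n# Auth\n"
print(audit_page(page, today=date(2026, 2, 1)))  # stale: reviewed over 180 days ago
```

Run something like this in CI over your docs repository and the "when was this last reviewed?" question stops depending on anyone's memory.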
The quiet shift in how products are represented
There is a broader consequence of all this that is worth stating directly.
Your documentation is no longer just a reference manual for developers. It's the source material that AI assistants use to represent your product to the world. When a builder asks Claude how to use your product, Claude's answer is shaped by whatever it can find and parse from your docs.
Good docs, good answer. Outdated, ambiguous, locked inside HTML that's hard for a model to parse? Worse answer, or an incorrect one. Simple as that.
The quality of the AI's answer about your product is now a direct proxy for your developer experience. Most companies aren't treating it that way yet.
The teams that are ahead on this (Stripe, Vercel, Cloudflare, Anthropic themselves) treat AI readability as a first-class concern. A foundational requirement that shapes how documentation gets written, structured, and maintained. Not a backlog item for next quarter.
The builder sitting in Claude right now, describing what they want to build, expecting working code in minutes. They may never visit a documentation site again. But the AI that serves them will. Constantly.
That AI is now your most frequent reader. The question is whether your docs are ready for it.
The best developer experience strategy in 2026 is not a conference talk or a quickstart guide. It is making sure the AI gets it right.
This post references publicly available research and product documentation. Statistics are drawn from GitHub's 2024 developer survey, the Stack Overflow 2024 Developer Survey, Index.dev's 2026 developer productivity report, Incremys Claude statistics, and Fortune's reporting on Anthropic. The /llms.txt specification is maintained at llmstxt.org. The Model Context Protocol is documented at modelcontextprotocol.io.