
Builders, Not Developers: How Claude Changed Who Your Docs Are For

The person integrating your API no longer reads your docs. They sit in Claude and describe what they want. Developer relations, API documentation, and the whole getting-started funnel need to be rethought for this new reality.

Thinking Out Loud

There is a person right now, somewhere, integrating your API. They are not on your documentation site. They have not opened your getting-started guide. They have never seen your interactive playground or your carefully designed sidebar navigation.

They are sitting in Claude. Or Copilot. Or Cursor. They typed something like "integrate the Stripe billing API with my Next.js app using the app router" and waited for working code to come back. The AI read your docs on their behalf. It found the relevant endpoints, understood the authentication flow, picked the right SDK methods, and produced an implementation.

The person never visited your site. They may never visit your site. And this is increasingly the normal way software gets built.

The journey nobody planned for

For a long time, developer relations followed a well-understood path. You wrote comprehensive documentation. You published quickstart guides. You gave conference talks. You maintained a presence on Stack Overflow. You made your API reference searchable, your SDKs idiomatic, your error messages helpful.

That path assumed the developer would read your content. Navigate your structure. Follow your steps.

GitHub's 2024 developer survey found that 97% of enterprise developers have used AI coding tools at some point. Stack Overflow's annual survey showed 76% of all developers are using or planning to use AI tools, with 62% of professionals actively using them day to day. Those numbers were already high. They have only climbed since.

The new journey looks different. Someone describes what they want in natural language. An AI assistant reads the documentation, finds the relevant sections, and generates the integration. The builder reviews the output. Maybe they refine the prompt. Maybe they ask a follow-up question. The whole process takes minutes, not hours.

The getting-started funnel that DevRel teams spent years perfecting is being bypassed. Not because it was bad. Because the entry point moved.

Two consumers, one set of docs

Documentation now has two fundamentally different audiences.

The first is the human reader. This person still exists. They show up for architecture decisions, edge case debugging, compliance review, and conceptual understanding. They want narrative explanations, well-organised reference material, and clear reasoning about trade-offs.

The second is the AI intermediary. It reads your documentation on behalf of a builder. It does not care about your sidebar. It does not appreciate your visual design. It needs structured, machine-parseable content: clean markdown, consistent formatting, explicit specifications it can reason about without ambiguity.

Almost every documentation site today is optimised exclusively for the first audience. The second audience is already the dominant consumer.

Jeremy Howard identified this tension when he proposed the /llms.txt standard in 2024. His observation was precise: "Large language models increasingly rely on website information, but face a critical limitation: context windows are too small to handle most websites in their entirety." The proposal is simple. A curated markdown file at /llms.txt that gives AI models a structured overview of your product and links to the most important resources. FastHTML, Anthropic's own docs, and a growing directory of projects now ship one.
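The file itself is deliberately plain. A minimal /llms.txt for a hypothetical API product might look like this (the product name, URLs, and section contents are illustrative, not taken from any real spec file):

```markdown
# Acme Billing API

> Acme provides a REST API for subscription billing: customers, plans,
> invoices, and webhooks.

## Docs

- [API reference (markdown)](https://docs.acme.dev/api.md): full endpoint specifications
- [Authentication](https://docs.acme.dev/auth.md): API keys and OAuth flows
- [Webhooks](https://docs.acme.dev/webhooks.md): event types and signature verification

## Optional

- [Changelog](https://docs.acme.dev/changelog.md): deprecations and version history
```

An H1 name, a blockquote summary, and H2 sections of annotated links. The "Optional" section marks material an AI can skip when its context window is tight.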

It is a useful convention. But it is also a symptom of a deeper problem. The real issue is not format. It is that most documentation was never designed with machine consumption in mind.

The builder is not cutting corners

There is a temptation to look at the person who prompts Claude instead of reading docs and conclude they are taking shortcuts. That they do not really understand what is happening in the code. That they are somehow a lesser kind of developer.

That is usually wrong.

Many of these builders are senior engineers making deliberate efficiency choices. They understand the code. They just do not want to navigate four pages of documentation to find the three lines they actually need. They have learned that an AI assistant can extract those lines faster than they can scan for them, so they delegate the reading.

Anthropic recognised this pattern when they built the Model Context Protocol. MCP is now supported by Claude, ChatGPT, VS Code, Cursor, and others. It is explicitly designed so AI assistants can reach into external systems, pull context, and act on it. The specification describes it as providing "access to an ecosystem of data sources, tools and apps which will enhance capabilities and improve the end-user experience."

That is infrastructure language, not convenience language. The builders using these tools are not avoiding work. They are working through a new layer. Your documentation is part of that layer whether you designed it to be or not.

Developer relations was built for a different era

If a DevRel strategy was designed before 2023, it was designed for a world where the developer read the docs directly. That world has not disappeared, but it is no longer the dominant interaction pattern for a growing share of builders.

This changes the calculus on several long-standing DevRel activities.

Conference talks. A 45-minute presentation at a developer conference reaches a room of a few hundred people. A well-structured /llms.txt file and clean machine-readable documentation reach every builder who asks any AI assistant about your product, continuously, at any time. The talk is a one-time event. The machine-readable docs compound. That does not make conferences worthless. It changes which activities have the highest leverage.

Getting-started guides. The classic five-step quickstart tutorial is increasingly a formality. The builder does not follow steps. They describe what they want and expect the AI to produce the integration. If the API is well-documented in a machine-friendly format, the AI handles the getting-started experience more efficiently than the tutorial. What tutorials should become instead is conceptual material: explaining why you would choose approach A over approach B. The AI can generate the implementation. It is less reliable at explaining the trade-offs.

Stack Overflow. Stack Overflow's own survey data showed that 84% of developers use technical documentation directly, with 90% of those relying on docs within API and SDK packages. But the way they access those docs is increasingly through an AI layer, not a browser tab. The questions that still reach Stack Overflow are the difficult ones, the edge cases and production debugging threads that require nuance. That is valuable. But it is no longer where the volume is.

When the AI reads your docs, freshness becomes critical

Here is the part that most teams have not thought through.

When a human reads a documentation page, they can apply judgement. They might notice the screenshots look old, or that a comment at the bottom says the process changed. They can evaluate context.

An AI assistant cannot do this. It reads the text, processes it as fact, and generates an answer with full confidence. If the documentation describes a deprecated endpoint, the AI will recommend integrating with it. If the documentation references infrastructure that was replaced six months ago, the AI will describe the old setup as current.

The builder trusts the AI. The AI trusts the documentation. If the documentation is stale, that trust chain delivers a confidently wrong answer.

This was always a problem with documentation, of course. Stale content has always confused people. But the damage was limited because human readers could sometimes detect the problem. AI intermediaries cannot. They amplify stale content by serving it at scale, with authority, to people who have no reason to doubt it.

Freshness is no longer a content quality issue. It is a reliability issue for every AI-powered workflow that touches your documentation.

The word "developer" is too narrow

The people building software in 2026 do not all identify as developers. Some are designers who prompt Claude to build a working prototype. Some are product managers who use Cursor to ship internal tools. Some are data analysts who describe a data pipeline in natural language and let an agent assemble it.

Ramp is a useful example. The fintech company went from a $5.8B valuation in 2023 to $32B by late 2025, crossing $1B in annualised revenue along the way. One of the fastest-growing startups in history. A widely discussed part of their approach: product managers building features directly with AI tools instead of waiting in an engineering backlog. PMs at Ramp do not just write specs. They ship code. The AI handles the implementation. The PM handles the intent.

That is not a shortcut. It is a new operating model. And it is working at a scale that makes it hard to dismiss as an experiment.

Anthropic has been deliberate about calling this the "builder" persona. Their tools are designed not just for professional software engineers but for anyone who can describe what they want to build. When Claude can scaffold a full-stack application from a Figma design via MCP, the traditional line between "developer" and "non-developer" dissolves.

This has real implications for anyone who maintains documentation or cares about developer experience. The audience is no longer limited to people who know what a REST endpoint is. It includes anyone whose AI assistant might interact with your product. The PM at Ramp who ships a feature using your API will probably never read your documentation directly. Their AI agent absolutely will.

What this means for documentation

If documentation now serves two audiences, human readers and AI intermediaries, it needs to work for both. That sounds obvious. In practice, almost nobody does it.

A few changes matter:

Machine-readable formats alongside human-readable ones. If your API docs are a beautifully rendered HTML page that an LLM has to scrape and parse, the AI is working harder than it should. Ship the raw OpenAPI spec alongside the rendered version. Ship clean markdown. Make the specifications accessible without requiring the AI to interpret page layout.
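Concretely, this means exposing the spec itself, not just the page rendered from it. A fragment of a hypothetical OpenAPI 3.1 document, served as raw YAML at a stable URL alongside the rendered reference (endpoint and field names are illustrative):

```yaml
openapi: 3.1.0
info:
  title: Acme Billing API
  version: "2.4.0"
paths:
  /v1/invoices:
    post:
      summary: Create an invoice
      operationId: createInvoice
      requestBody:
        required: true
        content:
          application/json:
            schema:
              $ref: "#/components/schemas/InvoiceCreate"
      responses:
        "201":
          description: Invoice created
components:
  schemas:
    InvoiceCreate:
      type: object
      required: [customer_id, amount]
      properties:
        customer_id:
          type: string
        amount:
          type: integer
          description: Amount in minor currency units
```

A model given this fragment can see required fields, types, and the success status code directly, with no layout to interpret.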

Block-level structure instead of page-level narrative. AI assistants do not consume documentation page by page. They extract relevant sections. A document with clear headings, self-contained paragraphs, and explicit block-level semantics is dramatically more useful to an AI than a flowing narrative that requires reading the entire page for context.

Trust signals that machines can read. When was this document last reviewed? Is it still current? Has the content been flagged? These signals need to exist in a form the AI can access, not just as visual cues on a web page. A freshness score, an expiry status, a review date: this is the metadata that lets an AI decide whether a document is safe to use as a source.
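There is no established standard for this yet, so any concrete shape is an assumption. One plausible form is YAML frontmatter at the top of each markdown doc (every field name here is hypothetical):

```yaml
---
title: Creating invoices
last_reviewed: 2026-01-15
review_interval_days: 90
status: current          # current | needs-review | deprecated
supersedes: /v0/charges  # older page this one replaces
---
```

Because frontmatter travels with the raw markdown, an AI that fetches the file gets the trust signals for free, without scraping a rendered page.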

Freshness as a prerequisite, not a feature. When an AI assistant serves a builder a confident answer based on a deprecated endpoint, the damage is worse than a 404. The builder builds on it. Ships it. Then it breaks in production. That sequence happens silently and at scale. Every document that an AI might reference needs a mechanism to prove it is still current.
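One lightweight mechanism is a CI check that fails when a page's review date has lapsed. A sketch, assuming each doc carries `last_reviewed` and `review_interval_days` fields in YAML-style frontmatter (both field names are assumptions, not a standard):

```python
import re
from datetime import date, timedelta

# Captures the block between the opening and closing "---" delimiters.
FRONTMATTER = re.compile(r"^---\n(.*?)\n---", re.DOTALL)

def is_stale(doc_text: str, today: date) -> bool:
    """Return True if the doc's review window has lapsed.

    Expects frontmatter fields (assumed, not a standard):
      last_reviewed: YYYY-MM-DD
      review_interval_days: integer
    Docs missing frontmatter or either field are treated as stale.
    """
    match = FRONTMATTER.match(doc_text)
    if not match:
        return True
    fields = {}
    for line in match.group(1).splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            # Drop any trailing "# comment" before stripping whitespace.
            fields[key.strip()] = value.split("#")[0].strip()
    try:
        reviewed = date.fromisoformat(fields["last_reviewed"])
        interval = int(fields["review_interval_days"])
    except (KeyError, ValueError):
        return True
    return today > reviewed + timedelta(days=interval)
```

Run over every file an AI might fetch, this turns freshness from a habit into a gate: a page that nobody has re-reviewed simply fails the build.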

The quiet shift in how products are represented

There is a broader consequence of all this that is worth stating directly.

Your documentation is no longer just a reference manual for developers. It is the source material that AI assistants use to represent your product to the world. When a builder asks Claude how to use your product, Claude's answer is shaped by whatever it can find and parse from your docs.

If your documentation is well-structured, current, and machine-readable, Claude gives a good answer. If it is outdated, ambiguous, or locked inside HTML that is difficult for a model to parse, Claude gives a worse answer, or an incorrect one.

The quality of the AI's answer about your product is now a direct proxy for your developer experience. And most companies are not treating it that way.

The teams that are ahead on this (Stripe, Vercel, Cloudflare, Anthropic themselves) treat AI readability as a first-class concern. Not something to address later. Not a nice-to-have. A foundational requirement that shapes how documentation is written, structured, and maintained.

The builder sitting in Claude right now, describing what they want to build and expecting working code in minutes, may never visit a documentation site again. But the AI that serves them will, constantly.

That AI is now your most frequent reader. The question is whether your documentation is ready for it.

The best developer experience strategy in 2026 is not a conference talk or a quickstart guide. It is making sure the AI gets it right.


This post references publicly available research and product documentation. Statistics are drawn from GitHub's 2024 developer survey and the Stack Overflow 2024 Developer Survey. The /llms.txt specification is maintained at llmstxt.org. The Model Context Protocol is documented at modelcontextprotocol.io.

Keep your docs fresh. Automatically.

Rasepi enforces review dates, tracks content health, and publishes to 40+ languages.

Get started for free →