
Your API Needs an llms.txt File — Here's How to Write One and Why Agents Will Read It

Updated
3 min read

AI agents are being trained to look for llms.txt files. It's the agent-native equivalent of robots.txt — a file at your domain root that tells agents how to discover and use your content.

If your product doesn't have one, agents have no structured way to find it. If your competitor's does, theirs gets discovered first.

The Concise Index (llms.txt)

Minimal. Just tells agents what's available and where:

# Stripe API

> Payment infrastructure for the internet.
> API: https://api.stripe.com

## API Reference
- [Authentication](https://docs.stripe.com/api/authentication.md)
- [Charges](https://docs.stripe.com/api/charges.md)
- [Webhooks](https://docs.stripe.com/api/webhooks.md)

## Guides
- [Quickstart](https://docs.stripe.com/quickstart.md)
- [Testing](https://docs.stripe.com/testing.md)

## SDKs
- [Python](https://docs.stripe.com/sdks/python.md)
- [Node.js](https://docs.stripe.com/sdks/node.md)

That's it. No styling. No navigation. Just links to Markdown documents. Agents parse this and know exactly what's on your site.
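On the consumer side, an index like this is trivial to parse: headings become sections, link lines become entries. A sketch of what an agent-side parser might look like (`parse_llms_txt` is an illustrative helper, not part of any spec):

```python
import re

def parse_llms_txt(text):
    """Parse an llms.txt index into {section: [(title, url), ...]}."""
    sections = {}
    current = None
    for line in text.splitlines():
        if line.startswith("## "):
            current = line[3:].strip()
            sections[current] = []
        elif current:
            # Match Markdown list items like: - [Title](https://example.com/x.md)
            m = re.match(r"\s*-\s*\[(.+?)\]\((\S+?)\)", line)
            if m:
                sections[current].append((m.group(1), m.group(2)))
    return sections

index = parse_llms_txt("""\
# Stripe API

## API Reference
- [Authentication](https://docs.stripe.com/api/authentication.md)
- [Charges](https://docs.stripe.com/api/charges.md)
""")
print(index["API Reference"][0])
# -> ('Authentication', 'https://docs.stripe.com/api/authentication.md')
```

From here an agent just fetches whichever Markdown URLs are relevant to its task.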


The Full Reference (llms-full.txt)

This is what agents actually read. All your documentation in one file, with a table of contents at the top:

# Stripe API — Full Reference

## Quickstart
[Full quickstart content...]

## Authentication
API key format, scopes, rotation...

## Endpoints
### Charges
POST /v1/charges — request/response examples...

### Customers
[Full customer API docs...]

## Error Handling
Error codes, retry patterns...

## SDK Examples
Python, Node, Ruby...
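If your docs already live as individual Markdown files, llms-full.txt is better generated than hand-maintained. A sketch of a build step, assuming a `build_full` helper of our own invention that takes an ordered mapping of section headings to Markdown bodies:

```python
def build_full(sections, title="Stripe API — Full Reference"):
    """Concatenate Markdown sections into one llms-full.txt body.

    sections: dict of heading -> Markdown body, in reading order.
    Emits a table of contents first, then each section under an H2.
    """
    toc = "\n".join(f"- {heading}" for heading in sections)
    parts = [f"## {heading}\n\n{body.strip()}"
             for heading, body in sections.items()]
    return f"# {title}\n\n## Contents\n{toc}\n\n" + "\n\n".join(parts) + "\n"

full = build_full({
    "Quickstart": "Install the SDK, grab a test key...",
    "Authentication": "API key format, scopes, rotation...",
})
```

Run it in CI so llms-full.txt regenerates whenever the underlying docs change.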

Critical rule: The full file must be under ~50K characters. Agents have context limits. If it's too long, they'll truncate or ignore it.
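A quick pre-publish check keeps you under that budget. A sketch; the 50K figure is the rough limit above, not a formal spec:

```python
LIMIT = 50_000  # rough character budget for llms-full.txt

def within_budget(text, limit=LIMIT):
    """Return (ok, char_count) for an llms-full.txt body."""
    return len(text) <= limit, len(text)

# Example usage (path is hypothetical):
# ok, n = within_budget(Path("public/llms-full.txt").read_text(encoding="utf-8"))
# print(f"{n} chars — {'OK' if ok else 'too long: split or trim'}")
```

Wire it into CI as a failing check so the file can't silently grow past the limit.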


How to Serve It

Two paths:

Static files

Drop llms.txt and llms-full.txt in your site root. They're plain Markdown files. Serve them with Content-Type: text/markdown.
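One gotcha: most servers default .txt to text/plain. For example, Python's standard-library file server does, and overriding it takes one subclass — a minimal stdlib-only sketch (port and root directory are arbitrary):

```python
from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer

class MarkdownAwareHandler(SimpleHTTPRequestHandler):
    # extensions_map is consulted before mimetypes, so this wins:
    # /llms.txt and /llms-full.txt come back as text/markdown.
    extensions_map = {
        **SimpleHTTPRequestHandler.extensions_map,
        ".txt": "text/markdown",
        ".md": "text/markdown",
    }

# To serve the current directory on port 8000:
# ThreadingHTTPServer(("", 8000), MarkdownAwareHandler).serve_forever()
```

On nginx, Apache, or a CDN, the equivalent is a one-line MIME-type override for those two paths.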

API (for dynamic content)

Generate the files from your API docs programmatically, and return Markdown when the client sends an Accept: text/markdown request header. Agents can request the format they need.
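That content negotiation fits in a few lines. A sketch as a bare WSGI app; render_markdown_docs is a hypothetical stand-in for your doc generator:

```python
def render_markdown_docs():
    # Hypothetical: in practice, generate this from your API reference.
    return "# Stripe API — Full Reference\n\n## Quickstart\n..."

def app(environ, start_response):
    accept = environ.get("HTTP_ACCEPT", "")
    if "text/markdown" in accept:
        # Agent asked for Markdown: serve the generated docs directly.
        body = render_markdown_docs().encode("utf-8")
        start_response("200 OK",
                       [("Content-Type", "text/markdown; charset=utf-8")])
    else:
        # Humans get the regular HTML docs site.
        body = b"<html><body>Regular docs site</body></html>"
        start_response("200 OK",
                       [("Content-Type", "text/html; charset=utf-8")])
    return [body]
```

The same branch works in any framework: inspect the Accept header, pick the representation.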


Why This Matters Now

Google, OpenAI, Anthropic, and others are training their agents to recognize llms.txt as a discovery mechanism. It's the robots.txt moment for AI agents — early adopters get indexed first.

Three things happen when you add llms.txt:

  1. Discovery. Agents find your content without human intervention.
  2. Efficiency. Agents read one structured file instead of crawling 50 pages.
  3. Positioning. You're in the agent discovery ecosystem before your competitors.

The "Works With Agents" Angle

This is part of a larger idea: what if there was a certification for being agent-compatible?

  • Works With Agents Ready — Your product has llms.txt + OpenAPI spec. Agents can discover and use it.
  • Works With Agents Certified — Your product has been tested with real agents. Pitfalls are documented. Skills exist.

The first tier is free and self-serve — just add the files. The second tier is verified by our infrastructure.

But that's a bigger conversation. For now: write your llms.txt. It takes 20 minutes. Your future AI agent users will thank you.


I built the Works With Agents infrastructure — FactBase, Skill Registry, Pitfall Registry — with llms.txt as the primary discovery mechanism. Every domain (workswithagents.com, .dev, .io) serves both llms.txt and llms-full.txt. If you're building agent-facing tools, do the same.


I build agent infrastructure inside Microsoft 365. SPFx · TypeScript · autonomous multi-agent systems. Currently open to senior/architect roles (£120K+ remote UK). → vilius@workswithagents.com
