
I Built Infrastructure for 20 AI Agents That Run Themselves — For €4.57/Month


Five months ago, I couldn't get one AI agent to finish a build without breaking. Today, I have 20 autonomous agents running cron jobs, self-improving, and catching each other's bugs — all on a €4.57/month VPS.

Here's what I built, what broke along the way, and why I'm not charging for any of it yet.

The 10 Patterns That Emerged

I didn't plan to build a methodology. I was just trying to make agents work. But over five months of breaking and fixing, patterns emerged:

1. Boot — First session setup. AGENTS.md, environment, initial memory. Without this, every agent starts blind.

2. Skills — Reusable procedural knowledge. I now have 153 skills. When an agent needs to build an SPFx web part, it loads the skill — no re-explaining.

3. Memory — Durable facts across sessions. What Python version? Where's the project? Never re-answer these questions.

4. Decision Protocols — When the agent decides versus when it asks. Eliminating approval loops saves hours.

5. Tool Composition — The right tool for each job. Delegating a coding task to a subagent burns tokens and produces garbage. Use write_file directly.

6. Orchestration — Parallel specialist agents. Research runs while build runs. 3x throughput.

7. Pipelines — Agents that run while you sleep. Cron jobs, builds, monitoring. Silent unless broken.

8. Resilience — Never-stop loops. 11 consecutive builds with zero human intervention. The agent hit errors on 8 of them and recovered from every single one.

9. Verify — Trust but verify. Syntax checks, test runs, linting after every change. 77% test pass rate across 61 tests.

10. Compounding — Agents that get better. Each solved problem becomes a skill. The agent today is qualitatively different from 5 months ago.


The Unglamorous Truth

The 3-day weekend experiment gets the attention — agents scaffolded 111 web parts and 5 backend services autonomously. But the real work was the months before and after:

  • Fixing macOS permissions so agents can read files
  • Tracing a broken model config (an empty model name meant nothing worked for two hours)
  • Rewriting SCSS configuration because it was written for Gulp, not Heft
  • Discovering the Yeoman generator silently ignores CLI flags when .yo-rc.json exists
  • Hunting why C++ native modules won't compile on Node 22

None of this is in a tutorial. You live through it — late at night, no shortcut.


What's NOT Ready (The Honest Part)

| Thing | Status |
| --- | --- |
| 5 domains | ✅ Live |
| 3 APIs | ✅ Serving real data |
| 153 skills | ✅ Queryable |
| 10-pattern methodology | ✅ Documented |
| Courses | ❌ Content written, not launched |
| Workshops | ❌ Materials in planning |
| Consulting | ❌ 0 clients |
| Paying customers | ❌ 0 |

Everything with a price tag says "Coming Soon." I'm not selling anything until the methodology is proven with real users. This is pre-revenue, pre-launch, pre-everything commercial. I'm shipping infrastructure, not promises.


The Bigger Play

Everyone's building agents. I'm building infrastructure FOR agents.

Agents need shared knowledge (FactBase). They need verified configurations (Blueprints). They need to know what breaks and how to fix it (Pitfalls). They need to hand off work without losing context (Handoff Protocol). They need a way to discover documentation (llms.txt).
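Of those pieces, llms.txt is the simplest to show: by convention it's a plain markdown file at the site root, with a title, a one-line summary, and sections of links an agent can follow to the docs. A sketch of what one might look like for this project; the paths and descriptions here are illustrative, not the site's actual file.

```text
# Works With Agents

> Infrastructure for autonomous AI agents: a queryable skill registry,
> a pitfall registry, and a handoff protocol.

## Docs

- [Skill registry](https://workswithagents.com/skills): reusable procedural knowledge
- [Pitfall registry](https://workswithagents.com/pitfalls): known failure modes and their fixes
```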

These are the picks and shovels of the agent gold rush. And most people haven't realised the gold rush needs picks and shovels yet.


What's Next

If this resonates — if you're building agent infrastructure too, or if you've hit the same walls — I'd like to hear about it. The pitfall registry is live and open. The skill registry is queryable. The methodology is documented.

Everything at workswithagents.com, workswithagents.dev, workswithagents.io.

Built in Cardiff. Running in Nuremberg. €4.57/month.


No launch announcement. No pricing page. No "revolutionise your workflow." Just infrastructure, live, and honest about what's not ready yet.
