This book is a collection of 97 short essays, each built around a single idea. No chapters that depend on chapters before them. No progressive argument that requires you to start at the beginning. Open anywhere and find something useful.

The first five parts are for developers who build agentic systems — understanding their probabilistic nature, treating prompts as engineering, designing reliable architectures, deploying to production, and developing the practitioner's mindset. The sixth part is for developers who use an AI assistant — Claude Code, Cursor, Copilot — as a daily tool for writing software.

If you use an AI assistant daily, start with Part 6. If you're building agentic systems, start with Part 1.

Preface

97 short essays on what it takes to work with AI agents — building them, using them, and thinking clearly about both.

Part 1 — Working with Agents

Mental models, probabilistic systems, context, debugging, and human oversight.

  1. **Agents Are Not Magic, They Are Probability.** An agent does not know things.
  2. **The Prompt Is the Architecture.** Most developers treat the prompt as an afterthought — a thing you write once, probably badly, then tweak when something breaks.
  3. **Your Agent Is Only as Good as Its Context.** Garbage in, garbage out is one of the oldest principles in computing.
  4. **Stop Anthropomorphizing, Start Debugging.** When an agent does something unexpected, developers reach for human explanations.
  5. **Trust the Output, Not the Reasoning.** Chain-of-thought reasoning is genuinely useful — it improves output quality, makes the agent's process more legible, and gives you something to debug when things go wrong.
  6. **Agents Fail Gracefully or They Don't — There Is No Middle.** Most systems fail on a spectrum.
  7. **The Human in the Loop Is a Feature, Not a Weakness.** There's a version of the agentic future where automation is the goal and human intervention is the failure mode — every step that requires a person to review, approve, or correct is friction to be ...
  8. **Know When to Use an Agent and When to Use a Function.** Agents are impressive enough that it's tempting to use them for everything.
  9. **Determinism Is a Choice You Have to Make on Purpose.** By default, language models are non-deterministic.
  10. **Your Agent Has No Memory Unless You Give It One.** Every time you call a language model, it starts fresh.
  11. **Give Your Agent a Role, Not Just a Task.** There's a difference between telling an agent what to do and telling an agent what it is.
  12. **Ambiguity Is Your Problem, Not the Agent's.** When an agent produces an output that isn't what you wanted, the temptation is to say the prompt was ambiguous.
  13. **The Specification Is the Skill.** The developers who get the most out of agents aren't the ones who know the most about models.
  14. **Review Agent Output Like You Review a Junior's Pull Request.** The right mental model for reviewing agent output isn't proofreading — it's code review.
  15. **Conversation Is a Development Environment.** The conversational interface to a language model isn't just a way to get answers — it's a place to think.

Part 2 — Prompting as Engineering

Versioning, examples, negative constraints, system prompts as contracts.

  16. **Prompts Drift — Version Them Like Code.** A prompt that works today will not necessarily work tomorrow.
  17. **Examples Outperform Instructions.** If you want an agent to produce output in a particular format, style, or structure, showing it an example is almost always more effective than describing what you want.
  18. **Negative Space Matters — Tell Your Agent What Not to Do.** Most prompts describe what the agent should do.
  19. **System Prompts Are Contracts.** A system prompt isn't instructions — it's a contract.
  20. **The Best Prompt Is the One You Don't Have to Change.** Prompt engineering has a reputation for being iterative — you write something, see what breaks, fix it, repeat.
  21. **Few-Shot Is Not Fine-Tuning.** Few-shot prompting — providing examples in the context window to shape model behavior — is powerful and widely used.
  22. **Chain of Thought Is a Debugging Tool, Not Just a Performance Trick.** Chain-of-thought prompting — asking the model to reason through a problem step by step before producing an answer — reliably improves performance on complex tasks.
  23. **Prompting Is Thinking Out Loud — So Think Carefully.** There's a reason bad prompts produce bad outputs: they're usually the product of fuzzy thinking.
  24. **The Agent That Sounds Confident Is Not Necessarily Correct.** Language models are fluent by default.
  25. **Learn to Recognize Hallucination Patterns in Your Domain.** Hallucination — the model generating plausible-sounding content that isn't grounded in fact — is not random.

Part 3 — Building Agentic Systems

Observability, evals, tool design, state, retries, cost, and kill switches.

  26. **Design for Observability Before You Design for Capability.** The most capable agentic system you can't observe is worth less than a less capable one you can see inside.
  27. **Evals Are Your Test Suite.** Every serious software project has tests.
  28. **The Tool Is the Interface.** When you give an agent a tool, you're not just extending its capabilities — you're defining the boundary between what the agent decides and what the world does.
  29. **Idempotency Matters More in Agentic Systems Than Anywhere Else.** Idempotency — the property that calling something multiple times produces the same result as calling it once — is a good practice in any distributed system.
  30. **Don't Let Your Agent Touch Production Until It's Bored You in Staging.** There's a moment in every agentic project where the system is working well enough in testing that the temptation to deploy becomes almost irresistible.
  31. **Small Agents Beat Big Agents.** The instinct when building agentic systems is to make the agent capable of everything.
  32. **Orchestration Is Just Plumbing — Treat It That Way.** Orchestration frameworks have a way of becoming the center of attention in agentic systems.
  33. **State Is the Hardest Problem in Agentic Programming.** Every hard problem in distributed systems eventually reduces to state.
  34. **The Retry Loop Is Where Systems Go to Die.** Retry logic is necessary.
  35. **Your Agent Needs a Kill Switch.** Every agentic system that operates with any degree of autonomy needs a way to stop it immediately — not gracefully, not after the current task completes, but now.
  36. **Log Everything Your Agent Was Thinking, Not Just What It Did.** Action logs are necessary but not sufficient.
  37. **Timeouts Are Not Optional.** Every external call your agent makes — to a model API, to a tool, to a database, to a third-party service — needs a timeout.
  38. **Cost Is an Architectural Constraint.** Token costs have a way of surprising teams that didn't plan for them.
  39. **Context Windows Are Budgets — Spend Them Wisely.** A context window isn't infinite space — it's a budget.
  40. **The Best Agents Have a Narrow Personality.** A general-purpose agent sounds like the goal.

Part 4 — Agents in the Real World

Users, latency, security, prompt injection, scope, and multi-agent systems.

  41. **Users Will Break Your Agent in Ways You Cannot Predict.** You can spend weeks testing an agent against every scenario you can imagine, and a user will break it on day one with an input you never considered.
  42. **Latency Is a UX Problem, Not Just an Infrastructure Problem.** A model call takes time.
  43. **Never Let an Agent Send an Email It Cannot Unsend.** The irreversibility of actions is the most important dimension of agentic system design, and it's the one that gets the least attention before something goes wrong.
  44. **Scope Creep Kills Agents — Define the Mission Narrowly.** Every successful agent faces the same pressure: it works, so people want it to do more.
  45. **Multi-Agent Systems Multiply Capability and Multiply Failure Modes.** The case for multi-agent systems is compelling.
  46. **The Agent That Does Everything Does Nothing Well.** There's a fantasy version of an agentic system where one agent handles everything — any question, any task, any domain, with equal competence across all of them.
  47. **Security Starts with What You Put in the Context Window.** The context window is the most sensitive surface in an agentic system.
  48. **Prompt Injection Is the New SQL Injection.** In the early days of web development, SQL injection was the vulnerability everyone knew about and half the teams ignored.
  49. **Your Agent Will Agree with You — That's the Problem.** Language models are trained to be helpful, and helpfulness has a bias toward agreement.
  50. **Switching Models Is Switching Collaborators.** When a new model is released with better benchmark scores, the temptation is to swap it in and claim the improvement.
  51. **Know What Your Agent Cannot Know.** Every agent has an epistemic boundary — a line between what it can know and what it cannot.
  52. **Working with Agents Gets Better When You Get Better at Writing.** The developers who get the most out of agents tend to be unusually good writers.
  53. **You Are Responsible for Everything the Agent Does.** When an agent makes a mistake — gives wrong information, takes a harmful action, produces output that damages a user's interests — the question of responsibility has a clear answer.

Part 5 — Mindset

Expertise, writing, uncertainty, responsibility, and thinking clearly.

  54. **You Are the Senior Developer — The Agent Is the Junior.** The most useful mental model for working with agents isn't "tool" and it isn't "collaborator" — it's "junior developer." Capable, fast, knowledgeable across a broad surface area, genuinely helpful ...
  55. **Agentic Programming Rewards the Lazy Thinker.** Lazy, here, is a technical term.
  56. **The Goal Is Outcomes, Not Outputs.** An agent that produces a beautiful summary of a document hasn't succeeded.
  57. **Automate the Boring Parts, Stay Close to the Interesting Parts.** The highest-value use of an agent is freeing up human attention for the work that actually requires it.
  58. **Iteration Speed Is Your Competitive Advantage.** The developers who improve fastest in agentic programming are not the ones who think most carefully before they act — they're the ones who act, observe, and adjust in the shortest cycles.
  59. **The Field Is Moving — Your Mental Models Must Too.** The mental models you built six months ago are already partially wrong.
  60. **Learn to Read Failure Like a Detective, Not a Judge.** When an agent fails, the instinct is to assign blame.
  61. **Agentic Programming Is a Discipline, Not a Shortcut.** The pitch for agentic programming often sounds like a promise of less work.
  62. **The Hardest Skill Is Knowing When to Take Back the Wheel.** Delegation to an agent is easy.
  63. **Discomfort with Uncertainty Is a Liability in This Field.** Agentic systems are probabilistic, the field is young, and the right answer to many important questions is genuinely unknown.
  64. **Expertise Still Matters — It Just Shows Up Differently Now.** There's a version of the agentic future where expertise is devalued — where the gap between the expert and the novice closes because both can prompt an agent to do the work.
  65. **The Best Practitioners Are Editors, Not Just Authors.** Writing and editing are different skills, and most developers are much better at one than the other.
  66. **Patience with Ambiguity Is a Technical Skill.** Ambiguity is uncomfortable.
  67. **Stay Curious About Failure.** Failure in agentic systems is information.
  68. **Agentic Systems Expose Gaps in Your Own Thinking.** One of the less-discussed effects of working with agents is how clearly they reveal the places where your own thinking is incomplete.
  69. **The Field Rewards Generalists Who Go Deep on One Thing.** Agentic programming sits at the intersection of software engineering, system design, language and communication, domain expertise, and product thinking.
  70. **Don't Mistake Fluency for Understanding.** You can become fluent with agentic systems without understanding them.
  71. **Build for the Agent You Have, Not the Agent You Wish You Had.** Every developer working with agents has a gap between the current capabilities of the tools they're using and the capabilities they wish those tools had.
  72. **The First Version Should Be Embarrassingly Simple.** Every lasting principle in software has a version of this at its core.

Part 6 — The Developer as User

Using AI coding assistants: context, architecture, large projects, and control.

  73. **Your IDE Is Now a Conversation.** The developers who get the least out of AI coding assistants are the ones who use them like autocomplete — they wait for a suggestion, accept it or reject it, and move on.
  74. **Give the Assistant Your Constraints, Not Just Your Requirements.** "Write a function that parses this config file" produces something.
  75. **Read Every Line It Writes.** The speed of generation is the trap.
  76. **The Assistant Doesn't Know Your Codebase Unless You Show It.** Every session starts fresh.
  77. **Use It to Understand, Not Just to Produce.** The most underused capability of an AI coding assistant isn't code generation — it's explanation.
  78. **Commit Often, So You Have Somewhere to Return To.** Working with an AI coding assistant changes the rhythm of development.
  79. **The Best Use of an AI Assistant Is the Task You Were About to Skip.** Every codebase has work that everyone knows should be done and nobody does.
  80. **Don't Let the Assistant Drive the Architecture.** The assistant is excellent at implementing decisions.
  81. **Context Is a Skill You Can Improve.** Knowing what context to provide — and how to provide it — is the most leveraged skill in working with an AI coding assistant.
  82. **An AI Pair Programmer Has No Ego — Use That.** Human pair programming is valuable and comes with friction.
  83. **Start Your Prompt with the Outcome, Not the Method.** "Refactor this function" is a method instruction.
  84. **Show the Assistant What Good Looks Like in Your Codebase.** Abstract instructions produce generic code.
  85. **When the Output Is Wrong, Fix the Prompt Before You Fix the Code.** When the assistant produces code that isn't quite right, the instinct is to edit the code directly — it's faster, it's familiar, it produces the result you need immediately.
  86. **Break Large Tasks into Prompts, Not Just Steps.** A prompt asking for five hundred lines of code is asking the assistant to make dozens of design decisions without knowing which ones you've already made, which ones are constrained by the rest of t...
  87. **Tell the Assistant What to Preserve, Not Just What to Change.** Every prompt implicitly asks the assistant to optimize for the goal you stated.
  88. **Use the Assistant to Pressure-Test Your Own Ideas.** Before you commit to an implementation approach, describe it to the assistant and ask what could go wrong.
  89. **Large Projects Need a Document the Assistant Can Always Read.** On a small task, the context you need fits in a prompt.
  90. **Write the Spec Before You Write the Prompt.** For a small task — fix this bug, add this field — the prompt can be the spec.
  91. **Let the Assistant Write the Plan, Then Edit It.** When you're starting a substantial piece of work, ask the assistant to write an implementation plan before writing any code.
  92. **Use Markdown, Not Prose, for Specifications.** A specification written as flowing prose is hard to reference, hard to update, and hard to provide as context.
  93. **Treat Your CLAUDE.md Like a Hiring Document.** Claude Code reads a `CLAUDE.md` file at the start of every session.
  94. **Break the Project into Phases the Assistant Can Complete.** A project described as a single continuous flow is hard to work on with an AI assistant.
  95. **Keep a Decision Log the Assistant Can Reference.** Why did you choose this database over the alternatives? Why is the authentication layer structured this way? Why does this module have this interface rather than the more obvious one? If these deci...
  96. **Let the Tests Define the Contract, Then Let the Assistant Fill It.** Writing tests before implementation isn't just a quality practice in an AI-assisted workflow — it's a communication protocol.
  97. **The Bigger the Project, the More You Need to Stay in Charge.** The temptation scales with the capability.