Axiom
First Principles of
AI-Driven Software Development
The first-principles framework for engineers who build AI-driven systems that actually work in production — reliable, maintainable, and designed to evolve.
AI-Driven Software
Development
Building on
solid ground.
Modern AI-driven development demands a new mental model — not frameworks, not tools, but principles that hold true as models evolve. Click any card to explore it live with AI on your device.
Ask
Axiom.
Get answers grounded in the book's principles. The AI runs entirely in your browser — no server, no logs, no API keys.
Ask anything about AI-driven development. Or pick a starter below.
The Principle
Mapper.
Describe your AI project. Axiom identifies which core principles apply — and exactly how they shape your decisions.
Describe a project — a chatbot, an agent, a pipeline — and we'll surface the 3–5 principles that matter most.
The Prompt
Lab.
Paste any rough AI prompt. Axiom rewrites it applying the Prompt-as-Interface principle and explains every improvement. Iterate your way to precision.
The right model at the right time is luck. Clear reasoning about system design is a skill. Skills compound. Luck does not.
— From the Introduction of Axiom
Why Axiom
exists.
The defining condition of building software with AI is that the tools change faster than anyone can properly learn them. A tutorial written in January is outdated by April. The framework you mastered last quarter is forked or replaced by the next one.
Running faster does not solve the problem. What you need is a way to evaluate new developments quickly, make sound architectural decisions under uncertainty, and build systems that do not collapse when the layer beneath them changes. Axiom is that framework — twelve first principles that describe how AI-driven systems behave, not which tool to use this quarter.
This book is for software engineers and technical leaders building products that actually run in production. It assumes you know how to write software. It does not assume you have spent the last five years on the AI frontier. If you ask "why" before "how," Axiom is for you.
Six parts.
Twelve principles.
Most teams treat the model as the end goal. The model is infrastructure — extraordinary, replaceable, rented. The product is the experience, workflow, and integration you build on top of it. That is where durable value lives, and where competitive moats actually form.
Send the same prompt twice and you may get two different answers. That is not a bug; the system is sampling from a distribution. Reliability stops meaning guaranteed correctness and starts meaning consistent behavior across the inputs and conditions you actually face in production.
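One way to make "consistent behavior" concrete is to measure it. The sketch below samples the same prompt repeatedly and reports how often the modal answer appears; `fake_model` is a hypothetical stand-in for a real sampled model call, used here only so the example runs on its own.

```python
import random
from collections import Counter

def fake_model(prompt: str, seed: int) -> str:
    """Hypothetical stand-in for a sampled model call; swap in a real client."""
    rng = random.Random(seed)
    # Simulate a model that usually says "4", but occasionally varies.
    return "4" if rng.random() < 0.9 else "four"

def agreement_rate(prompt: str, n: int = 20) -> float:
    """Fraction of runs returning the modal answer: a simple consistency metric."""
    answers = [fake_model(prompt, seed=i) for i in range(n)]
    _, count = Counter(answers).most_common(1)[0]
    return count / n

rate = agreement_rate("What is 2 + 2?")
```

Tracking a metric like this across the inputs you actually serve is what turns "reliability" from a feeling into a number.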
The context window is not an input field. It is the primary architectural surface in AI-driven applications. Treat it with the same rigor as a database schema — every token you place in it is a design decision with latency, cost, and quality consequences.
More context is not better context. Every irrelevant token competes for the model's attention and degrades output quality. Ruthless curation of what enters the window is one of the highest-leverage engineering disciplines you can practice — high signal, low noise, always.
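Curation can be as simple as a relevance-ranked greedy fill against a token budget. The sketch below assumes a crude 4-characters-per-token heuristic (an assumption, not a real tokenizer) and hypothetical relevance scores from an upstream retriever.

```python
def approx_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token (assumption, not a real tokenizer).
    return max(1, len(text) // 4)

def curate_context(candidates: list[tuple[float, str]], budget: int) -> list[str]:
    """Greedily keep the highest-relevance snippets that fit the token budget."""
    chosen, used = [], 0
    for score, snippet in sorted(candidates, key=lambda c: c[0], reverse=True):
        cost = approx_tokens(snippet)
        if used + cost <= budget:
            chosen.append(snippet)
            used += cost
    return chosen

snippets = [
    (0.9, "User is on the Pro plan."),
    (0.2, "Unrelated marketing copy about our Q3 launch event."),
    (0.7, "Previous ticket: billing page returned a 500 error."),
]
picked = curate_context(snippets, budget=20)
```

The low-relevance snippet is dropped, not because it is wrong, but because every token it occupies competes with the tokens that matter.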
A prompt is the contract between human intent and machine capability. Like an API signature, it must be explicit, versioned, tested, and built to encode assumptions, constrain scope, and define output formats. Vague prompts produce vague systems.
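Treating a prompt like an API signature might look like the following sketch: a versioned template with an explicit output format, plus contract checks that can run in CI. The template name, version string, and task are illustrative assumptions.

```python
from string import Template

# A prompt treated like an API: versioned, explicit about scope and output format.
SUMMARIZE_V2 = Template(
    "You are a support summarizer.\n"
    "Summarize the ticket below in at most $max_sentences sentences.\n"
    'Respond with JSON: {"summary": str, "sentiment": "pos"|"neg"|"neutral"}.\n'
    "Ticket:\n$ticket"
)
PROMPT_VERSION = "summarize/2.0"  # bump on any wording change, like an API version

def render(ticket: str, max_sentences: int = 2) -> str:
    return SUMMARIZE_V2.substitute(ticket=ticket, max_sentences=max_sentences)

prompt = render("Checkout fails with a 402 on Safari.")
```

A contract check such as `assert "JSON" in prompt` is trivial, but it fails loudly the moment someone edits the template and silently drops the output-format clause.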
Long, monolithic prompts are the spaghetti code of AI systems — hard to debug, brittle to change. Break them into focused, composable units that can be assembled, tested, and evolved independently. A prompt that tries to do everything does nothing well.
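Composition can be this literal: small, single-purpose prompt units joined at assembly time. The unit names and wording below are illustrative, not prescribed by the book.

```python
# Small, focused prompt units that compose, instead of one monolithic prompt.
ROLE = "You are a code reviewer."
SCOPE = "Only comment on correctness and security; ignore style."
FORMAT = "Return a numbered list, one finding per line."

def compose(*parts: str) -> str:
    """Assemble independent prompt units into one message."""
    return "\n\n".join(parts)

review_prompt = compose(ROLE, SCOPE, FORMAT)
# Evolving the system means swapping one unit, not rewriting a wall of text:
security_only = compose(ROLE, "Only comment on security.", FORMAT)
```

Each unit can now be tested, versioned, and reused independently, which is exactly what a monolithic prompt forbids.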
Every agentic system needs clear, explicit boundaries — defined upfront, not discovered after a production incident. The biggest agentic mistake is granting too much autonomy too soon. Define what the agent can and cannot do before deciding how autonomously it can act.
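An upfront boundary can be encoded as an allowlist that the tool dispatcher enforces. The tool names below are hypothetical; the point is that anything outside the declared set is refused by default, rather than relying on a list of known-bad actions.

```python
# Explicit capability boundaries, declared before the agent ever runs.
ALLOWED_TOOLS = {"search_docs", "read_file"}  # what the agent CAN do

def invoke_tool(name: str, payload: dict) -> str:
    if name not in ALLOWED_TOOLS:
        # Deny by default: anything outside the boundary is refused.
        raise PermissionError(f"tool '{name}' is outside the agent's boundary")
    return f"ran {name} with {payload}"

result = invoke_tool("search_docs", {"query": "refund policy"})
```

Calling `invoke_tool("delete_file", {...})` raises before anything irreversible happens, which is the production incident you did not have.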
Full automation is not the finish line, and often not even the right goal. The most reliable AI systems know exactly where human judgment belongs in the workflow and design those handoffs intentionally — useful, not performative.
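A designed handoff is just an explicit routing rule. This sketch assumes two illustrative signals (an action's kind and a confidence score) and three hypothetical destinations; real systems will have richer policies, but the shape is the same.

```python
from dataclasses import dataclass

@dataclass
class Action:
    kind: str
    confidence: float  # e.g. model self-report or an eval-derived score

def route(action: Action) -> str:
    """Decide where human judgment belongs: approve, review, or automate."""
    if action.kind in {"refund", "account_delete"}:
        return "human_approval"  # irreversible actions always get a human handoff
    if action.confidence < 0.8:
        return "human_review"    # low confidence goes to a review queue
    return "auto"                # safe and confident: automate
```

The handoff is useful, not performative, precisely because the conditions for it are written down and testable.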
You cannot improve what you cannot measure, and you cannot test probabilistic outputs with deterministic unit tests. Define what good looks like before you build. Construct the eval harness alongside the system — it is the feedback loop that makes iteration possible.
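An eval harness does not have to be elaborate to be useful. The sketch below defines "good" as executable checks and scores a model against them; `fake_model` and both eval cases are hypothetical placeholders for a real model client and a real golden set.

```python
def fake_model(prompt: str) -> str:
    """Hypothetical stand-in; swap in a real model call."""
    return "Paris" if "France" in prompt else "unknown"

# Each case pairs an input with an executable definition of "good".
EVAL_CASES = [
    ("What is the capital of France?", lambda out: "Paris" in out),
    ("What is the capital of Atlantis?", lambda out: "unknown" in out.lower()),
]

def run_evals(model) -> float:
    """Return the pass rate across the eval set."""
    passed = sum(1 for prompt, check in EVAL_CASES if check(model(prompt)))
    return passed / len(EVAL_CASES)

score = run_evals(fake_model)
```

Run this on every prompt, context, or model change and iteration gets a feedback loop instead of a vibe check.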
AI systems fail in ways traditional software does not — hallucinations, drift, timeouts, confidence collapse. Design every path to degrade gracefully rather than catastrophically. A thoughtful fallback strategy is a feature, not an admission of defeat.
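A fallback strategy can be expressed as a small wrapper: try the primary model, then a fallback, then a safe static default. The retry count, the exception types, and the `flaky` stand-in below are illustrative assumptions.

```python
def call_with_fallback(primary, fallback, default: str, retries: int = 2) -> str:
    """Try the primary model, then a fallback model, then a static default."""
    for model in (primary, fallback):
        for _ in range(retries):
            try:
                return model()
            except (TimeoutError, ValueError):  # timeouts, malformed output, etc.
                continue
    return default  # degrade gracefully instead of surfacing a stack trace

def flaky():
    raise TimeoutError("model timed out")

answer = call_with_fallback(flaky, flaky, default="Sorry, please try again shortly.")
```

The user sees a calm message instead of a catastrophe; the failure is still logged and fixed, but it is no longer the user's problem.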
Coupling your architecture to a specific model is a liability that compounds with every release. Build a model-agnostic interface layer so that swapping providers, versions, or fine-tunes is a configuration change — not a rewrite.
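One minimal shape for that interface layer is a `Protocol` plus a provider registry, so the provider choice lives in configuration. `ProviderA` and `ProviderB` are hypothetical adapters; in practice each would wrap a vendor SDK.

```python
from typing import Protocol

class TextModel(Protocol):
    """The only surface application code is allowed to depend on."""
    def complete(self, prompt: str) -> str: ...

class ProviderA:
    def complete(self, prompt: str) -> str:
        return f"[provider-a] {prompt}"  # hypothetical adapter around a vendor SDK

class ProviderB:
    def complete(self, prompt: str) -> str:
        return f"[provider-b] {prompt}"

PROVIDERS = {"a": ProviderA, "b": ProviderB}

def get_model(config: dict) -> TextModel:
    # Swapping providers is a configuration change, not a rewrite.
    return PROVIDERS[config["provider"]]()

model = get_model({"provider": "b"})
```

Application code calls `model.complete(...)` and never imports a vendor SDK directly, so a provider swap touches one config key instead of every call site.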
Abstraction handles the structural problem of change. Drift handles the human side. The engineers who sustain their edge are not those who learn the most tools — they are those who build habits, architectures, and team cultures designed to absorb what comes next.
What the book
gives you.
"When your team debates whether to give an agent a new tool, you can ground the discussion in scoping frameworks and trust levels — instead of arguing from intuition."
"When output quality drops, you have a place to start. Check the context first, then the prompt, then the model behavior, then the eval calibration. Each principle points to a specific question to ask."
"When a new technique, tool, or model is released, assess it against the principles yourself. Does it improve the product layer? Does it respect probabilism? Does it compose with what you have?"
Build on principles
that don't expire.
Tools change. Models improve. The engineers who reason from first principles don't get certainty about the future — they get the ability to meet it well.