The Only Axes That Matter for AI Agents
Most “AI agent” discussions fail because they skip the control system.
All agent designs can be located on four axes:
- Feedback timing: When does reality constrain the model? • None • Post-hoc • Step-level (immediate)
- State persistence: What survives beyond a single token window? • None • Ephemeral (in-run only) • Persistent (cross-run)
- Execution authority: Who decides what runs next? • Model-only • Orchestrator-controlled • Hybrid
- Failure boundary: What limits damage or waste? • Token limits • Critics / evaluators • Idempotency, rollback, sandboxing
These axes explain why two systems using the same model behave radically differently.
Agent capability does not come from autonomy. It comes from where feedback enters, where state lives, who holds execution authority, and how failure is bounded.
Everything else is an implementation detail.
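To make the axes concrete, here is a minimal sketch that encodes them as a typed taxonomy in Python. Every name in it (`Feedback`, `AgentDesign`, and so on) is illustrative, not part of any existing library.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Feedback(Enum):
    NONE = auto()        # reality never constrains the model
    POST_HOC = auto()    # output checked only after it is complete
    STEP_LEVEL = auto()  # every step checked immediately

class State(Enum):
    NONE = auto()
    EPHEMERAL = auto()   # survives within a single run only
    PERSISTENT = auto()  # survives across runs

class Authority(Enum):
    MODEL_ONLY = auto()
    ORCHESTRATOR = auto()
    HYBRID = auto()

@dataclass(frozen=True)
class AgentDesign:
    feedback: Feedback
    state: State
    authority: Authority
    failure_boundary: str  # e.g. "token limit", "critic", "sandbox"

# The baseline pattern from the next section, located on the axes:
SINGLE_SHOT = AgentDesign(Feedback.NONE, State.NONE,
                          Authority.MODEL_ONLY, "token limit")
```

Locating any design is then a matter of filling in four fields, as `SINGLE_SHOT` does for the baseline below.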
- Single-Shot Generation (Baseline)
  - Axes • Feedback: None • State: None • Authority: Model-only • Failure boundary: Token limit
  - Pattern: Input → Output (see the sketch below)
  - Example: Summarization prompt in ChatGPT with no tools.
  - Failure: Hallucinations are undetectable. No recovery.
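A minimal sketch of the baseline, assuming a hypothetical `complete()` stand-in for any chat-completion API:

```python
def complete(prompt: str) -> str:
    # Hypothetical stand-in for one model call; swap in your provider's SDK.
    raise NotImplementedError("replace with a real model call")

def single_shot(document: str) -> str:
    # One call, one answer. Nothing re-enters the loop, so a hallucination
    # here is final: no feedback, no state, no recovery path.
    return complete(f"Summarize the following:\n\n{document}")
```

Note what is absent: no tool, no memory, no check. The return value of a single call is the product.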
- Toolformer / One-Shot Tool Use
  - Axes • Feedback: Post-hoc • State: None • Authority: Model-only • Failure boundary: Tool timeout
  - Pattern: Input → Tool call → Output (see the sketch below)
  - Example: LLM calls a calculator or search API once.
  - Failure: Wrong parameters are executed as-is. Tool output is not re-checked.
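A minimal sketch of one-shot tool use, again with a hypothetical `complete()` and a toy tool registry of my own invention:

```python
import json

# Demo registry only; eval is unsafe outside a sandbox and is used here
# purely to keep the sketch short.
TOOLS = {"calculator": lambda expr: str(eval(expr))}

def complete(prompt: str) -> str:
    # Hypothetical stand-in for one model call; swap in your provider's SDK.
    raise NotImplementedError("replace with a real model call")

def one_shot_tool_use(question: str) -> str:
    # Step 1: the model chooses the single tool call (model-only authority).
    call = json.loads(complete(
        'Reply with exactly one JSON tool call, e.g. '
        '{"tool": "calculator", "args": "2+2"}.\n'
        f"Question: {question}"
    ))
    # Step 2: the orchestrator executes it exactly once. A wrong tool name
    # or bad args fails or misleads here, bounded only by a timeout.
    result = TOOLS[call["tool"]](call["args"])
    # Step 3: the raw result flows into the final answer, never re-checked.
    return complete(f"Question: {question}\nTool result: {result}\nAnswer:")
```

The control gap is visible in the code: nothing validates `call` before execution, and nothing compares `result` against the question before it is woven into the answer.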