Prashant Chandel
2025-12-18 · 1 min read

AI Agents

The Only Axes That Matter for AI Agents

Most “AI agent” discussions fail because they skip the control system.

All agent designs can be located on four axes:

  1. Feedback timing: When does reality constrain the model?
     • None • Post-hoc • Step-level (immediate)

  2. State persistence: What survives beyond a single token window?
     • None • Ephemeral (in-run only) • Persistent (cross-run)

  3. Execution authority: Who decides what runs next?
     • Model-only • Orchestrator-controlled • Hybrid

  4. Failure boundary: What limits damage or waste?
     • Token limits • Critics / evaluators • Idempotency, rollback, sandboxing
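The four axes can be made concrete as a small data type. This is a sketch only; `AgentProfile` and its field names are hypothetical, chosen to mirror the axis values listed above.

```python
from dataclasses import dataclass
from typing import Literal

@dataclass(frozen=True)
class AgentProfile:
    """Locates one agent design on the four axes."""
    feedback: Literal["none", "post_hoc", "step_level"]
    state: Literal["none", "ephemeral", "persistent"]
    authority: Literal["model", "orchestrator", "hybrid"]
    failure_boundary: Literal["token_limit", "critic", "sandbox"]

# Two designs on the same model can sit at very different points:
single_shot = AgentProfile("none", "none", "model", "token_limit")
one_shot_tool = AgentProfile("post_hoc", "none", "model", "token_limit")
```

Comparing profiles rather than model names makes the later taxonomy easy to tabulate.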

These axes explain why two systems using the same model behave radically differently.

Agent capability does not come from autonomy. It comes from where feedback enters, where state lives, and how failure is bounded.

Everything else is an implementation detail.

  1. Single-Shot Generation (Baseline)

Axes
• Feedback: None
• State: None
• Authority: Model
• Failure boundary: Token limit

Pattern: Input → Output

Example: A summarization prompt in ChatGPT with no tools.

Failure: Hallucinations are undetectable, and there is no recovery path.
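The baseline pattern is a single function call. A minimal sketch, where `call_model` is a hypothetical stand-in for any LLM completion API:

```python
def call_model(prompt: str) -> str:
    # Placeholder: a real implementation would hit an LLM endpoint.
    return f"summary of: {prompt}"

def single_shot(prompt: str) -> str:
    # Input → Output. No feedback enters, no state survives, and the
    # only failure boundary is the provider's token limit.
    return call_model(prompt)
```

Nothing downstream ever checks the output, which is exactly why hallucinations here are undetectable.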

  2. Toolformer / One-Shot Tool Use

Axes
• Feedback: Post-hoc
• State: None
• Authority: Model
• Failure boundary: Tool timeout

Pattern: Input → Tool call → Output

Example: The LLM calls a calculator or search API once.

Failure: The model can pass wrong parameters, and the tool's output is never re-checked.
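The one-shot pattern adds a single round trip through a tool. In this sketch, `plan_tool_call` and the calculator are hypothetical stand-ins for the model's parameter choice and the external tool:

```python
def calculator(expression: str) -> str:
    # Post-hoc feedback: reality enters once, after the model has
    # already committed to its parameters.
    return str(eval(expression, {"__builtins__": {}}, {}))

def plan_tool_call(question: str) -> str:
    # Stand-in for the model emitting tool parameters; wrong
    # parameters here propagate straight through unchecked.
    return "2 + 2"

def one_shot_tool_use(question: str) -> str:
    args = plan_tool_call(question)
    result = calculator(args)          # Input → Tool call → Output
    return f"The answer is {result}."  # result is never re-checked
```

Note that the model's authority is absolute: no orchestrator validates the arguments before the call or the result after it.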