Practical AI Agent Architectures
Concrete examples showing when to use Planner–Executor vs ReAct, common integration patterns, and starter pseudo-templates you can adapt to your stack.
Customer Support Triage (ReAct)
High Adaptability
Use a ReAct loop to ingest a ticket, search knowledge bases, ask clarifying questions, and propose a resolution or route to a team.
- Tools: search_kb, get_user_history, create_followup_question, draft_reply
- Why ReAct: fast, incremental feedback; each step can consult a tool, observe the result, and think again before acting.
// Pseudocode: ReAct loop for ticket triage
loop {
  thought := think("What is missing to resolve this ticket?")
  action  := choose_tool(thought)   // e.g., search_kb, get_user_history
  obs     := run(action)
  if resolved(obs) or routed(obs) { break }
}
reply_or_route()
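The loop above can be sketched as runnable JavaScript. The tools are stubs (the ticket text, KB contents, and the rule-based "think" step standing in for an LLM are all invented for illustration):

```javascript
// Stub tools; in production these would call real services.
const tools = {
  search_kb: (t) =>
    t.text.includes("refund") ? { article: "refund-policy", resolves: true } : null,
  get_user_history: () => ({ priorTickets: 2 }),
  draft_reply: (obs) => `See ${obs.article} for the refund steps.`,
};

// ReAct loop: think -> act -> observe, until resolved or budget exhausted.
function triage(ticket, budget = 5) {
  const scratchpad = [];
  for (let i = 0; i < budget; i++) {
    // "Think": a rule-based stand-in for an LLM deciding what is missing.
    const thought = scratchpad.length === 0 ? "search_kb" : "get_user_history";
    const obs = tools[thought](ticket); // act + observe
    scratchpad.push({ thought, obs });
    if (obs && obs.resolves) {
      return { status: "resolved", reply: tools.draft_reply(obs), scratchpad };
    }
  }
  return { status: "routed", team: "tier-2", scratchpad }; // budget spent: route
}
```

`triage({ text: "I want a refund" })` resolves on the first iteration; a ticket the stub KB cannot answer is routed after the budget runs out.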
ETL/Batch Orchestration (Planner–Executor)
Cost‑Efficient
Generate a plan (DAG) once, then execute deterministically with retries, idempotency, and metrics.
- Tools: list_sources, extract, transform, load, validate, notify
- Why Planner–Executor: low LLM-call frequency; stable flows benefit from an upfront plan.
// Plan (generated once, then executed deterministically)
plan = [
  step("extract",   {src: "s3://raw"}),
  step("transform", {job: "spark-clean"}),
  step("load",      {dst: "warehouse"}),
  step("validate"),
]
execute(plan)
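A minimal executor for this plan, sketched in JavaScript with bounded retries and idempotency (the step handlers are stubs; a real pipeline would call Spark, the warehouse, etc.):

```javascript
// Stub handlers keyed by step name; results are invented for illustration.
const handlers = {
  extract: (args) => ({ rows: 100, src: args.src }),
  transform: (args) => ({ rows: 100, job: args.job }),
  load: (args) => ({ loaded: 100, dst: args.dst }),
  validate: () => ({ ok: true }),
};

function step(name, args = {}) {
  return { name, args };
}

// Execute the plan in order with bounded retries and idempotency:
// completed steps are recorded in `done` and skipped on re-runs.
function execute(plan, done = new Set(), maxRetries = 3) {
  const results = [];
  for (const s of plan) {
    const key = s.name + JSON.stringify(s.args);
    if (done.has(key)) continue; // idempotent: skip finished work
    let res;
    for (let attempt = 1; attempt <= maxRetries; attempt++) {
      try {
        res = handlers[s.name](s.args);
        break;
      } catch (e) {
        if (attempt === maxRetries) throw e; // retries exhausted
      }
    }
    done.add(key);
    results.push({ step: s.name, res });
  }
  return results;
}

const plan = [
  step("extract", { src: "s3://raw" }),
  step("transform", { job: "spark-clean" }),
  step("load", { dst: "warehouse" }),
  step("validate"),
];
```

Passing the same `done` set to a second `execute(plan, done)` call makes the re-run a no-op, which is what lets a crashed batch resume safely.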
Research Assistant with RAG (ReAct + Tools)
Explainable
Iteratively search, retrieve, and cite sources. Keep a running scratchpad for claims and evidence.
- Tools: web_search, retrieve_embeddings, cite_formatter
- Notes: Bound the loop (budget, steps) and require citations for final answers.
// Thought → Action → Observation cycles with a fixed budget
for i in 1..N {
  t := think("What should be verified next?")
  a := pick([web_search, retrieve_embeddings])
  o := run(a)
  scratchpad.append(o)
}
final_answer := compose_with_citations(scratchpad)
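A runnable sketch of the bounded loop, where every scratchpad entry keeps its source so the final answer can cite it. The tools, corpus, and the round-robin tool choice (standing in for a learned policy) are illustrative stubs:

```javascript
// Stub retrieval tools returning { text, source } pairs (invented corpus).
const web_search = () => [
  { text: "LLMs can cite sources.", source: "https://example.com/a" },
];
const retrieve_embeddings = () => [
  { text: "RAG grounds answers in documents.", source: "doc:42" },
];

// Bounded Thought -> Action -> Observation loop; claims keep their evidence.
function research(question, budget = 4) {
  const scratchpad = [];
  const actions = [web_search, retrieve_embeddings];
  for (let i = 0; i < budget; i++) {
    const act = actions[i % actions.length]; // stand-in for a tool-choice policy
    for (const hit of act(question)) scratchpad.push(hit);
    if (scratchpad.length >= 2) break; // enough evidence gathered
  }
  // Require citations for the final answer, or refuse to answer.
  if (scratchpad.length === 0) return "Insufficient evidence.";
  const cites = [...new Set(scratchpad.map((e) => e.source))];
  return scratchpad.map((e) => e.text).join(" ") + " [" + cites.join(", ") + "]";
}
```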
Ticket Auto‑Fix (Hybrid)
Safety‑First
Planner drafts a remediation plan; each step is executed by a small ReAct loop with guardrails.
- Guardrails: dry‑run, policy checks, human approvals, sandbox execution
- Outputs: PR, test logs, rollback plan
// Pseudocode: plan once, execute each step under guardrails
plan := plan_fix(ticket)
for step in plan {
  result := react_execute(step, guardrails)
  if !result.ok { halt_and_request_handoff() }
}
open_pull_request()
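A guardrailed execution sketch in JavaScript. The guardrail checks, step shape (`cmd`, `risk`), and failure reasons are all hypothetical; real checks would run in a sandbox and gate on human approval:

```javascript
// Guardrails run before any step is applied; all names here are illustrative.
const guardrails = {
  policyCheck: (step) => !step.cmd.includes("rm -rf"), // block destructive commands
  needsApproval: (step) => step.risk === "high",       // humans gate risky steps
  dryRun: () => true,                                  // simulate before applying
};

// Execute one plan step inside a small guarded loop; fail closed.
function react_execute(step) {
  if (!guardrails.policyCheck(step)) return { ok: false, reason: "policy" };
  if (guardrails.needsApproval(step)) return { ok: false, reason: "awaiting-approval" };
  if (!guardrails.dryRun(step)) return { ok: false, reason: "dry-run-failed" };
  return { ok: true, output: `applied: ${step.cmd}` }; // sandboxed apply goes here
}

// Run the whole plan; any failed step halts and hands off to a human.
function run_plan(plan) {
  const log = [];
  for (const step of plan) {
    const result = react_execute(step);
    log.push(result);
    if (!result.ok) return { status: "handoff", reason: result.reason, log };
  }
  return { status: "pr-opened", log };
}
```

Failing closed (handoff on any `!result.ok`) is the point of the hybrid: the planner proposes, but nothing destructive can slip through a failed guardrail.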
Evaluation & Guardrails
Regardless of architecture, add offline evaluation and runtime checks:
- Unit prompts and golden datasets
- Hallucination and policy classifiers
- Budget/timeouts per task
- Structured outputs with JSON schema validation
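The last item can be sketched with a tiny hand-rolled validator (a stand-in for a real JSON Schema library such as Ajv; the `replySchema` fields are invented for illustration):

```javascript
// Minimal structural check for an agent's JSON output.
// Each field maps to a predicate the parsed value must satisfy.
const replySchema = {
  status: (v) => ["resolved", "routed"].includes(v),
  reply: (v) => typeof v === "string" && v.length > 0,
  citations: (v) => Array.isArray(v),
};

// Parse, then validate every schema field; reject anything malformed.
function validateOutput(raw, schema) {
  let obj;
  try {
    obj = JSON.parse(raw);
  } catch {
    return { ok: false, error: "not JSON" };
  }
  for (const [key, check] of Object.entries(schema)) {
    if (!(key in obj) || !check(obj[key])) {
      return { ok: false, error: `bad field: ${key}` };
    }
  }
  return { ok: true, value: obj };
}
```

Rejecting (rather than repairing) malformed output keeps the failure visible, so the runtime can re-prompt or escalate instead of acting on garbage.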
Template Snippets
ReAct Loop (Generic)
function reactAgent(input, tools, budget = 10) {
  let scratchpad = []
  for (let i = 0; i < budget; i++) {
    const thought = llm.think({input, scratchpad})    // reason about what's missing
    const action = policy.chooseTool(thought, tools)  // pick a tool + arguments
    const obs = env.run(action)                       // execute and observe
    scratchpad.push({thought, action, obs})
    if (policy.satisfied(scratchpad)) break           // stop early when done
  }
  return llm.finalize({input, scratchpad})            // compose the final answer
}
Planner → Executor
function planAndExecute(goal, toolkit) {
  const plan = llm.plan({goal, toolkit})      // one LLM call: DAG or ordered steps
  for (const step of plan.steps) {
    const res = executor.run(step, toolkit)   // deterministic execution, no LLM
    if (!res.ok) return escalate(res)         // hand off or replan on failure
  }
  return success()
}