
Manager–Worker Pattern

The Manager–Worker pattern (also called supervisor-worker or hierarchical orchestration) is one of the simplest and most effective ways to coordinate multiple agents.

In this architecture:

               User Request
                     │
        Manager Agent (Coordinator)
                     │
    ┌───────────┬────┴──────┬─────────┐
    │           │           │         │
Researcher    Coder      Analyst   Tester

The manager is responsible for:

  - Decomposing the user's goal into subtasks
  - Assigning each subtask to the most suitable worker
  - Evaluating worker results and maintaining shared state
  - Deciding when the goal is achieved and assembling the final output


How the Manager–Worker Pattern Works

The typical execution flow is:

  1. User submits a high-level goal.
  2. Manager decomposes the goal into subtasks.
  3. Manager assigns each subtask to the most suitable worker.
  4. Worker executes the subtask (using tools via MCP, memory, and its specialized logic).
  5. Worker returns results to the manager.
  6. Manager evaluates results, updates shared state, and decides the next step.
  7. Process repeats until the goal is achieved.

This creates a clear, centralized control flow while allowing parallel or sequential execution of subtasks.
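The parallel case from the flow above can be sketched with plain Python threads; the worker functions here are stand-ins for real agents, and all names are illustrative assumptions:

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in worker functions -- real workers would be full agents with tools.
def researcher(task):
    return f"research notes on {task}"

def analyst(task):
    return f"analysis of {task}"

def run_parallel(subtasks):
    """Dispatch independent subtasks to workers concurrently and collect
    the results keyed by subtask name."""
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(worker, task)
                   for name, (worker, task) in subtasks.items()}
        return {name: future.result() for name, future in futures.items()}

results = run_parallel({
    "players": (researcher, "key players in the 2026 AI chip market"),
    "trends": (analyst, "performance trends"),
})
```

The manager would then fold `results` back into its shared state before deciding the next step.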


Example: AI Chip Market Research System

Goal: “Produce a comprehensive report on the 2026 AI chip market, including key players, performance trends, and investment recommendations.”

Manager breakdown:

  1. Researcher — gather data on key players and market size
  2. Analyst — evaluate performance trends and benchmarks
  3. Analyst — draft investment recommendations from the findings
  4. Manager — review the pieces, request revisions if needed, and assemble the final report

The manager maintains shared context (using semantic + episodic memory) and can call workers multiple times if needed.
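A minimal sketch of what that shared context could look like; the "semantic" (durable facts) / "episodic" (event log) split mirrors the memory types mentioned above, and all field names are illustrative assumptions:

```python
# Shared context the manager maintains between worker calls (illustrative).
shared_context = {
    "goal": "2026 AI chip market report",
    "semantic": {},   # durable facts extracted from worker results
    "episodic": [],   # ordered log of worker calls and their outputs
}

def record(worker, task, result, facts=None):
    """Append an episode to the log and merge any extracted facts."""
    shared_context["episodic"].append(
        {"worker": worker, "task": task, "result": result}
    )
    if facts:
        shared_context["semantic"].update(facts)

# The manager records each worker call as it completes:
record("researcher", "identify key players", "three vendors identified",
       facts={"vendor_count": 3})
```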


Key Components

Component      | Responsibility                                | Typical Capabilities
Manager Agent  | Decomposition, orchestration, quality control | Strong reasoning, planning, memory
Worker Agents  | Execution of specialized subtasks             | Focused tools, domain-specific memory

Workers are usually lighter and more focused, while the manager is given stronger reasoning capabilities and access to the full conversation history.
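One way to capture this division is a small interface contract: every worker exposes a uniform `execute` method the manager can call, while each implementation stays narrow. A sketch, assuming these names rather than any specific framework's API:

```python
from typing import Protocol

class Worker(Protocol):
    """The minimal surface a worker exposes to the manager."""
    name: str
    def execute(self, task: str) -> str: ...

class Researcher:
    """A deliberately tiny worker; a real one would call an LLM and tools."""
    name = "researcher"

    def execute(self, task: str) -> str:
        return f"[{self.name}] handled: {task}"

# The manager only needs the registry, not the concrete classes.
workers = {"researcher": Researcher()}
result = workers["researcher"].execute("key players in AI chips")
```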


Task Decomposition and Decision Making

The manager typically uses an LLM to:

  - Break the goal into concrete subtasks
  - Select the most suitable worker for each subtask
  - Judge whether a worker's result is good enough or needs rework
  - Decide when to stop and produce the final answer

Modern implementations often combine this with procedural memory (predefined decomposition templates) and reflection steps.
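A decomposition template drawn from procedural memory might be a parameterized task list that the manager instantiates before (or instead of) free-form LLM planning. A sketch under assumed names:

```python
# Predefined decomposition templates keyed by task type (procedural memory).
# Worker names and template structure are illustrative assumptions.
TEMPLATES = {
    "market_report": [
        ("researcher", "gather data on {topic}"),
        ("analyst", "analyze trends in {topic}"),
        ("analyst", "draft recommendations for {topic}"),
    ],
}

def decompose(task_type, topic):
    """Instantiate a template; a real system would fall back to LLM
    planning when no template matches."""
    template = TEMPLATES.get(task_type, [("generalist", "handle {topic}")])
    return [(worker, step.format(topic=topic)) for worker, step in template]

plan = decompose("market_report", "the 2026 AI chip market")
```

Templates make the common cases cheap and predictable, while the LLM handles goals that do not fit any template.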


Example Implementation (Simplified)

from langgraph.graph import StateGraph, END

# Assumes `llm`, `workers` (a name -> agent mapping), and `extract_worker`
# (which parses the worker name out of the decision text) are defined elsewhere.

def manager_node(state):
    # Use the LLM to decide the next action or worker
    decision = llm.invoke(
        f"Goal: {state['goal']}\nCurrent state: {state}\n"
        f"Available workers: {list(workers.keys())}\nNext step?"
    )
    if "final_answer" in decision.lower():
        return {"final_output": state["accumulated_results"]}
    # Otherwise, assign the subtask to the chosen worker
    worker_name = extract_worker(decision)
    result = workers[worker_name].execute(state)
    state["accumulated_results"].append(result)
    return state

# The graph runs the manager in a loop until the termination condition

Real systems often add reflection, retry logic, and shared memory between steps.
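Retry logic in particular is easy to sketch as a wrapper around the worker call; attempt counts, backoff, and the validation check below are illustrative defaults, not a prescribed recipe:

```python
import time

def execute_with_retry(worker_fn, task, max_attempts=3,
                       validate=lambda r: bool(r)):
    """Run a worker function, retrying on exceptions or on results that
    fail a simple validation check."""
    last_error = None
    for attempt in range(1, max_attempts + 1):
        try:
            result = worker_fn(task)
            if validate(result):
                return result
            last_error = ValueError(f"empty result on attempt {attempt}")
        except Exception as exc:
            last_error = exc
        time.sleep(0.01 * attempt)  # tiny backoff, just for the sketch
    raise RuntimeError(f"worker failed after {max_attempts} attempts") from last_error

# A flaky stand-in worker that succeeds on its second call:
calls = {"n": 0}
def flaky(task):
    calls["n"] += 1
    return "" if calls["n"] < 2 else f"done: {task}"

result = execute_with_retry(flaky, "summarize findings")
```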


Advantages of the Manager–Worker Pattern

  - Clear, centralized control flow that is easy to reason about and debug
  - Workers stay specialized, lightweight, and independently improvable
  - Independent subtasks can run in parallel
  - Quality control lives in one place: the manager

Limitations and Trade-offs

  - The manager is a single point of failure and can become a bottleneck
  - Every hop through the manager adds latency and token cost
  - A rigid up-front decomposition handles dynamic, exploratory tasks poorly

Best practices:

  - Keep the manager's prompt and state compact; push detail into the workers
  - Add reflection and retry logic around worker calls
  - Cap the number of manager iterations to avoid infinite loops


When to Choose Manager–Worker

Use this pattern when:

  - The goal decomposes naturally into well-defined subtasks
  - Subtasks map onto distinct specialties (research, coding, analysis, testing)
  - You need centralized quality control and a predictable control flow

For more dynamic or exploratory tasks, flatter patterns (like Handoff/Swarm) often perform better.


Looking Ahead

In this article we explored the Manager–Worker pattern — a centralized orchestration model where a manager agent decomposes goals and coordinates specialized workers.

In the next article we will examine the Handoff Pattern (Swarm), a more decentralized architecture where agents can directly delegate tasks to one another without a permanent central coordinator.

→ Continue to 6.3 — Handoff Pattern (Swarm)