Building Agents with LangGraph
As agents grew more sophisticated, the limitations of simple control loops became clear. Traditional loops (observe → reason → act) work for basic tasks but struggle with conditional logic, long-running workflows, parallel execution, and maintaining reliable state over many steps.
LangGraph solves these problems by letting developers build agents as stateful, graph-based workflows instead of implicit loops.
What Is LangGraph?
LangGraph is a framework for constructing directed graph-based AI agents. It makes the agent’s workflow explicit, controllable, and composable by modeling it as a graph of nodes and edges.
- Nodes = discrete steps (reasoning, tool calling, observation processing, reflection, etc.)
- Edges = transitions between nodes (including conditional branching)
- State = shared data that flows through the entire graph
This design gives developers precise control while still allowing the LLM to drive intelligent decisions inside each node.
Core Concepts
Agent State
Everything in LangGraph revolves around a shared state object. It carries the goal, conversation history, tool results, intermediate findings, and final output.
```python
from typing import Annotated
from typing_extensions import TypedDict
from langgraph.graph.message import add_messages

class AgentState(TypedDict):
    goal: str
    messages: Annotated[list[dict], add_messages]
    tool_results: list[dict]
    final_report: str | None
```

Each node reads from and updates this state.
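The reducer attached via `Annotated` (such as `add_messages` above) controls how a node's partial update merges into the shared state: annotated fields accumulate, while plain fields are overwritten. Here is a dependency-free sketch of that merge rule (the `merge_update` helper is hypothetical, written only to illustrate the idea):

```python
import operator

# Hypothetical mini-version of LangGraph's state-merge rule:
# fields with a registered reducer accumulate; all others are overwritten.
def merge_update(state: dict, update: dict, reducers: dict) -> dict:
    out = dict(state)
    for key, value in update.items():
        if key in reducers:
            out[key] = reducers[key](out.get(key, []), value)
        else:
            out[key] = value
    return out

state = {"messages": [{"role": "user", "content": "hi"}], "final_report": None}
update = {"messages": [{"role": "assistant", "content": "hello"}], "final_report": "done"}
new_state = merge_update(state, update, reducers={"messages": operator.add})
print(len(new_state["messages"]))  # 2  (appended, not replaced)
print(new_state["final_report"])   # done  (overwritten)
```

This is why nodes can return small partial dicts instead of copying the whole state on every step.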
Typical LangGraph Workflow
A common agent graph often mirrors the architecture we studied in Module 2:
```
User Input
    ↓
Reasoning (Planner)
    ↓
Tool Selection & Execution
    ↓
Observation Processing
    ↓
Reflection
    ↓ (cycle back if needed)
Final Answer
```

The graph supports cycles (for repeated reasoning) and conditional branches (for adaptive behavior).
Complete LangGraph Example
Here is a realistic, runnable example that implements a ReAct-style agent for comparing GPUs (RTX 4090 vs. H100):
```python
from typing import Literal, Annotated
from typing_extensions import TypedDict
from langchain_ollama import ChatOllama
from langchain_core.messages import HumanMessage, AIMessage, SystemMessage
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langgraph.checkpoint.memory import MemorySaver

MODEL = "qwen3.5:9b"
llm = ChatOllama(model=MODEL, temperature=0.2)

# 1. Define the state
class AgentState(TypedDict):
    goal: str
    messages: Annotated[list, add_messages]
    next: Literal["tools", "final_answer"]
    final_report: str | None
```
```python
# 2. Define nodes
def reasoning_node(state: AgentState) -> dict:
    prompt = (
        f"Goal: {state['goal']}\n\n"
        "Decide the next action. If you need to look something up, say 'use tool'. "
        "Otherwise provide a final_answer."
    )
    messages = (
        [SystemMessage(content="You are a GPU expert.")]
        + list(state["messages"])
        + [HumanMessage(content=prompt)]
    )
    response: AIMessage = llm.invoke(messages)
    next_action = "tools" if "tool" in response.content.lower() else "final_answer"
    return {
        "messages": [response],
        "next": next_action,
    }

def tool_node(state: AgentState) -> dict:
    # Simulated tool result
    result = "H100 outperforms RTX 4090 in training throughput by 2-3x."
    return {
        "messages": [HumanMessage(content=f"Tool result: {result}")],
    }

def final_answer_node(state: AgentState) -> dict:
    history = "\n".join(m.content for m in state["messages"])
    messages = [
        SystemMessage(content="You are a GPU expert."),
        HumanMessage(content=f"Summarize a clear final answer based on:\n{history}"),
    ]
    response: AIMessage = llm.invoke(messages)
    return {
        "messages": [response],
        "final_report": response.content,
    }
```
```python
# 3. Build the graph
workflow = StateGraph(AgentState)
workflow.add_node("reasoning", reasoning_node)
workflow.add_node("tools", tool_node)
workflow.add_node("final_answer", final_answer_node)

# 4. Define edges
workflow.add_edge(START, "reasoning")

def route_after_reasoning(state: AgentState) -> Literal["tools", "final_answer"]:
    return state["next"]

workflow.add_conditional_edges(
    "reasoning",
    route_after_reasoning,
    {"tools": "tools", "final_answer": "final_answer"},
)
workflow.add_edge("tools", "reasoning")  # ReAct cycle
workflow.add_edge("final_answer", END)

# 5. Compile with checkpointing
graph = workflow.compile(checkpointer=MemorySaver())
```
```python
# 6. Run
initial_state: AgentState = {
    "goal": "Compare RTX 4090 and H100 for machine learning workloads",
    "messages": [],
    "next": "final_answer",
    "final_report": None,
}

result = graph.invoke(initial_state, config={"configurable": {"thread_id": "1"}})

print("\n=== FINAL REPORT ===")
print(result["final_report"])
```

This example demonstrates the complete pattern:
- Shared typed state
- Multiple nodes
- Conditional routing
- Cycle (ReAct loop)
- Checkpointing via MemorySaver (or a persistent store in production)
Key Features of LangGraph
- Cycles — Enable repeated reasoning/tool loops inside a structured graph.
- Conditional Branching — Dynamic path selection based on current state.
- Checkpointing — Automatically saves state after each node, allowing resume, debugging, and human-in-the-loop.
- Persistence — Works with Redis, Postgres, or in-memory stores for long-running agents.
Advantages of Graph-Based Agents
- Explicit and readable workflow
- Precise control and deterministic behavior
- Excellent debuggability (inspect any node/state)
- Built-in reliability through checkpointing
- Scalable composition of complex behaviors
LangGraph represents the shift from prompt engineering to workflow engineering.
Looking Ahead
LangGraph shows how graph-based architectures overcome the limitations of traditional loops, making agents more reliable, observable, and production-ready.
In the next module we will explore Tool Use & Protocols, covering how to design robust tools, define schemas, handle errors, and work with emerging standards like the Model Context Protocol (MCP).
→ Continue to 4.1 — Why Tools Make Agents Powerful