
Building Agents with LangGraph

As agents grew more sophisticated, the limitations of simple control loops became clear. Traditional loops (observe → reason → act) work for basic tasks but struggle with conditional logic, long-running workflows, parallel execution, and maintaining reliable state over many steps.

LangGraph solves these problems by letting developers build agents as stateful, graph-based workflows instead of implicit loops.


What Is LangGraph?

LangGraph is a framework for constructing directed graph-based AI agents. It makes the agent’s workflow explicit, controllable, and composable by modeling it as a graph of nodes and edges.

This design gives developers precise control while still allowing the LLM to drive intelligent decisions inside each node.


Core Concepts

Agent State

Everything in LangGraph revolves around a shared state object. It carries the goal, conversation history, tool results, intermediate findings, and final output.

from typing import Annotated
from typing_extensions import TypedDict
from langgraph.graph.message import add_messages

class AgentState(TypedDict):
    goal: str
    messages: Annotated[list[dict], add_messages]
    tool_results: list[dict]
    final_report: str | None

Each node reads from and updates this state.
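The read-and-update pattern can be illustrated with a plain-Python sketch (this is not LangGraph's actual runtime; `research_node` and the manual `dict.update` merge are hypothetical stand-ins for a real node and the framework's state reducer):

```python
from typing import TypedDict

class AgentState(TypedDict):
    goal: str
    messages: list[dict]
    tool_results: list[dict]

def research_node(state: AgentState) -> dict:
    """Read the goal from state and return a partial update."""
    finding = {"role": "tool", "content": f"Looked up: {state['goal']}"}
    # Nodes return only the keys they change; the framework merges them in.
    return {"tool_results": state["tool_results"] + [finding]}

state: AgentState = {"goal": "compare GPUs", "messages": [], "tool_results": []}
state.update(research_node(state))  # crude stand-in for LangGraph's state merge
print(state["tool_results"][0]["content"])  # → Looked up: compare GPUs
```

The key idea: nodes never mutate a global; they read the shared state and emit partial updates, which keeps each step inspectable.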


Typical LangGraph Workflow

A common agent graph mirrors the architecture we studied in Module 2:

User Input
  ↓
Reasoning (Planner)
  ↓
Tool Selection & Execution
  ↓
Observation Processing
  ↓
Reflection
  ↓ (cycle back if needed)
Final Answer

The graph supports cycles (for repeated reasoning) and conditional branches (for adaptive behavior).
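The cycle and the conditional branch can be sketched as a plain control loop before we bring in LangGraph itself (illustrative only; `plan`, `act`, and `finish` are hypothetical stand-ins for real graph nodes):

```python
def plan(state: dict) -> dict:
    # Reasoning: decide whether more tool use is needed (toy heuristic: two rounds).
    state["next"] = "tools" if state["rounds"] < 2 else "final"
    return state

def act(state: dict) -> dict:
    # Tool execution + observation processing.
    state["observations"].append(f"result {state['rounds']}")
    state["rounds"] += 1
    return state

def finish(state: dict) -> dict:
    state["answer"] = "; ".join(state["observations"])
    return state

state = {"rounds": 0, "observations": [], "next": None, "answer": None}
while True:
    state = plan(state)           # Reasoning (Planner)
    if state["next"] == "tools":  # conditional branch
        state = act(state)        # cycle back for another round
    else:
        state = finish(state)     # Final Answer
        break
print(state["answer"])  # → result 0; result 1
```

LangGraph makes exactly this structure explicit: the `if` becomes a conditional edge, and the `while` becomes a cycle in the graph.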


Complete LangGraph Example

Here is a realistic, runnable example that implements a ReAct-style agent for comparing GPUs (RTX 4090 vs H100):

from typing import Literal, Annotated
from typing_extensions import TypedDict
from langchain_ollama import ChatOllama
from langchain_core.messages import HumanMessage, AIMessage, SystemMessage
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langgraph.checkpoint.memory import MemorySaver

MODEL = "qwen3.5:9b"
llm = ChatOllama(model=MODEL, temperature=0.2)

# 1. Define the state
class AgentState(TypedDict):
    goal: str
    messages: Annotated[list, add_messages]
    next: Literal["tools", "final_answer"]
    final_report: str | None

# 2. Define nodes
def reasoning_node(state: AgentState) -> dict:
    prompt = (
        f"Goal: {state['goal']}\n\n"
        "Decide the next action. If you need to look something up, say 'use tool'. "
        "Otherwise provide a final_answer."
    )
    messages = (
        [SystemMessage(content="You are a GPU expert.")]
        + list(state["messages"])
        + [HumanMessage(content=prompt)]
    )
    response: AIMessage = llm.invoke(messages)
    next_action = "tools" if "tool" in response.content.lower() else "final_answer"
    return {
        "messages": [response],
        "next": next_action,
    }

def tool_node(state: AgentState) -> dict:
    # Simulated tool result
    result = "H100 outperforms RTX 4090 in training throughput by 2-3x."
    return {
        "messages": [HumanMessage(content=f"Tool result: {result}")],
    }

def final_answer_node(state: AgentState) -> dict:
    history = "\n".join(m.content for m in state["messages"])
    messages = [
        SystemMessage(content="You are a GPU expert."),
        HumanMessage(content=f"Summarize a clear final answer based on:\n{history}"),
    ]
    response: AIMessage = llm.invoke(messages)
    return {
        "messages": [response],
        "final_report": response.content,
    }

# 3. Build the graph
workflow = StateGraph(AgentState)
workflow.add_node("reasoning", reasoning_node)
workflow.add_node("tools", tool_node)
workflow.add_node("final_answer", final_answer_node)

# 4. Define edges
workflow.add_edge(START, "reasoning")

def route_after_reasoning(state: AgentState) -> Literal["tools", "final_answer"]:
    return state["next"]

workflow.add_conditional_edges(
    "reasoning",
    route_after_reasoning,
    {"tools": "tools", "final_answer": "final_answer"},
)
workflow.add_edge("tools", "reasoning")  # ReAct cycle
workflow.add_edge("final_answer", END)

# 5. Compile with checkpointing
graph = workflow.compile(checkpointer=MemorySaver())

# 6. Run
initial_state: AgentState = {
    "goal": "Compare RTX 4090 and H100 for machine learning workloads",
    "messages": [],
    "next": "final_answer",
    "final_report": None,
}
result = graph.invoke(initial_state, config={"configurable": {"thread_id": "1"}})
print("\n=== FINAL REPORT ===")
print(result["final_report"])

This example demonstrates the full pattern end to end: a typed shared state, separate reasoning and tool nodes, conditional routing out of the reasoning step, a ReAct cycle from tools back to reasoning, and checkpointed execution via MemorySaver.


Key Features of LangGraph

- An explicit, shared state object that every node reads from and updates
- Cycles, enabling repeated reasoning (ReAct-style loops)
- Conditional edges for adaptive branching between nodes
- Built-in checkpointing (e.g. MemorySaver) for resumable, long-running workflows


Advantages of Graph-Based Agents

LangGraph represents the shift from prompt engineering to workflow engineering: instead of hoping an implicit loop behaves, developers get explicit control flow, reliable state across many steps, and first-class support for the conditional logic and long-running workflows that simple observe → reason → act loops struggle with.


Looking Ahead

LangGraph shows how graph-based architectures overcome the limitations of traditional loops, making agents more reliable, observable, and production-ready.

In the next module we will explore Tool Use & Protocols, covering how to design robust tools, define schemas, handle errors, and work with emerging standards like the Model Context Protocol (MCP).

→ Continue to 4.1 — Why Tools Make Agents Powerful