Building Multi-Agent AI Systems with LangGraph

Dec 20, 2024 · 2 min read

AI · LangGraph · LLM · Python

The first generation of LLM applications was mostly prompt-in, response-out. Chain a few prompts, maybe call an API, return the result. That works for simple tasks but falls apart when you need an agent to plan, retry on failure, branch based on tool results, or coordinate with other agents.

LangGraph brings a directed graph model to LLM workflows. Nodes are functions (or LLM calls), edges define the flow, and state is explicitly managed as data that flows through the graph. This makes complex agent behaviour composable and debuggable.

Why Graphs Over Chains?

A chain executes linearly — A → B → C. A graph can do A → B → C, but also A → (B or C depending on a condition), or A → B → A (retry loops), or fan-out to multiple parallel nodes and merge results.

For an agent that needs to use tools, validate its own output, and decide whether to continue or hand off to a different agent, a graph is the right model.

A Minimal LangGraph Agent

from langgraph.graph import StateGraph, END
from typing import List, TypedDict

class AgentState(TypedDict):
    messages: List
    next_step: str

def call_llm(state: AgentState) -> AgentState:
    # `llm` is assumed to be a chat model with tools bound,
    # e.g. ChatOpenAI(...).bind_tools(tools)
    response = llm.invoke(state["messages"])
    next_step = "tool" if response.tool_calls else "end"
    return {"messages": state["messages"] + [response], "next_step": next_step}

def call_tool(state: AgentState) -> AgentState:
    # `execute_tools` is assumed to run each tool call from the last
    # message and return the resulting tool messages
    results = execute_tools(state["messages"][-1].tool_calls)
    return {"messages": state["messages"] + results, "next_step": "llm"}

def router(state: AgentState) -> str:
    return state["next_step"]

graph = StateGraph(AgentState)
graph.add_node("llm", call_llm)
graph.add_node("tool", call_tool)
graph.add_conditional_edges("llm", router, {"tool": "tool", "end": END})
graph.add_edge("tool", "llm")
graph.set_entry_point("llm")

agent = graph.compile()

The graph makes the control flow explicit and inspectable. You can visualise it, add checkpoints for human-in-the-loop pauses, or swap out individual nodes without rewiring everything.

What This Enables in Production

At 1cell.ai, we use LangGraph to build a discovery agent for clinical and genomic datasets. The agent receives a natural language query, routes to the appropriate specialist sub-agent (schema lookup, SQL generation, result validation), and assembles a response with dynamic D3 visualisation config.

The graph model made it straightforward to:

  • Add a validation node that retries the SQL generation step if the query fails
  • Branch to a "clarification" node when the query is ambiguous
  • Instrument each node independently for observability
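Because each node is a plain function over the state dict, the retry logic is ordinary Python. This is an illustrative sketch, not our production code — `run_sql`, the state keys, and the node names are hypothetical:

```python
from typing import TypedDict

class QueryState(TypedDict):
    sql: str
    error: str
    attempts: int

MAX_RETRIES = 3

def run_sql(sql: str) -> str:
    # Stub executor for illustration: returns "" on success, else an error
    return "" if sql.lstrip().upper().startswith("SELECT") else "syntax error"

def validate_sql(state: QueryState) -> QueryState:
    error = run_sql(state["sql"])
    return {**state, "error": error, "attempts": state["attempts"] + 1}

def route_after_validation(state: QueryState) -> str:
    if not state["error"]:
        return "assemble_response"  # query ran cleanly
    if state["attempts"] < MAX_RETRIES:
        return "generate_sql"       # loop back and regenerate the SQL
    return "clarification"          # give up and ask the user to clarify
```

Wired into a graph with `add_conditional_edges`, the `generate_sql` branch becomes a bounded retry cycle rather than hand-rolled while-loop plumbing.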

Where LangGraph Fits

LangGraph is the right tool when your LLM workflow needs:

  • Cycles — retry loops, self-correction
  • Conditional branching — different paths based on tool results or model output
  • Multi-agent coordination — supervisor agents delegating to specialist agents
  • Checkpoints — persisting state for long-running or human-in-the-loop workflows

For simple linear pipelines, a basic chain is still fine. LangGraph's value shows up as workflows get complex.