How to Build a Simple Chatbot Using LangGraph and LangSmith: A Step-by-Step Guide for Beginners
Are you excited to build your very own chatbot? Don’t worry if you’re new to coding — this guide will explain everything like we’re learning it for the first time! We’ll use LangGraph and LangSmith to create a chatbot that can talk back to you when you ask it questions. By the end, you’ll have your own simple chatbot ready to go!
What You’ll Learn in This Article:
- What is a chatbot?
- How to create a chatbot using LangGraph.
- How LangSmith helps make your chatbot work better.
What is a Chatbot?
A chatbot is a computer program that can have conversations with people. You can ask it questions, and it will respond as if it’s a real person! Chatbots are used in apps, websites, and even in video games to give helpful information or just chat for fun.
In this guide, we will create a basic chatbot step-by-step.
Step 1: Set Up Your Workspace
Before we start, we need to install some tools. These tools will help us build and run the chatbot.
First, let’s install two important tools: LangGraph and LangSmith. These are like the building blocks that help you create and test your chatbot.
Run this command to install them:
%%capture --no-stderr
%pip install -U langgraph langsmith
%pip install -U langchain_anthropic
Next, we need to set up something called an API key. Think of an API key as a special password that lets you use a service like Anthropic, which helps our chatbot talk back to us.
Here’s how to set it up:
import getpass
import os
def _set_env(var: str):
if not os.environ.get(var):
os.environ[var] = getpass.getpass(f"{var}: ")
_set_env("ANTHROPIC_API_KEY")
Step 2: Build the Chatbot’s Brain (StateGraph)
Imagine if our chatbot had a brain that helped it remember what we say and how to reply. That’s exactly what StateGraph does — it helps our chatbot think and remember the conversation.
We’ll start by telling the chatbot to remember every message we send it. Here’s how to build the chatbot’s “brain” or State:
from typing import Annotated
from typing_extensions import TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
class State(TypedDict):
    # Store the messages in a list (conversation history)
    messages: Annotated[list, add_messages]

graph_builder = StateGraph(State)
What is a State?
In simple words, State is like the chatbot’s memory. It remembers everything you say so that it can reply properly. We store the messages in something called a list, which is just a fancy way of storing multiple things in one place.
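The key detail is that add_messages is a reducer: it merges new messages into the stored history instead of overwriting it. Here is a rough, library-free sketch of that idea (add_messages_sketch is a made-up stand-in for illustration, not the real LangGraph function):

```python
def add_messages_sketch(existing: list, new: list) -> list:
    """Toy stand-in for LangGraph's add_messages reducer: instead of
    replacing the stored value, it appends the incoming messages."""
    return existing + new

# Each turn merges into the history rather than overwriting it
history: list = []
history = add_messages_sketch(history, [("user", "Hi!")])
history = add_messages_sketch(history, [("assistant", "Hello! How can I help?")])
```

After two turns, `history` holds both messages, which is exactly why the chatbot can "remember" the conversation.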
Step 3: Add a Chatbot Node
A node is like a little worker that does one job. In our chatbot, we will have a node that creates responses. To make our chatbot respond, we’ll use LangChain Anthropic (a special tool that helps the bot talk).
Here’s how to create the chatbot node:
from langchain_anthropic import ChatAnthropic
llm = ChatAnthropic(model="claude-3-haiku-20240307")
def chatbot(state: State):
    return {"messages": [llm.invoke(state["messages"])]}

# Add the chatbot node to the graph builder
graph_builder.add_node("chatbot", chatbot)
What is a Node?
Think of a node like a robot inside your chatbot. When you send a message, the robot (node) will think about what you said and then decide how to respond.
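In code terms, a node is just a function that receives the current State and returns a partial update to it. A tiny hypothetical example (echo_node is invented for illustration and uses no LLM, but it has the same shape as the chatbot node above):

```python
def echo_node(state: dict) -> dict:
    """A toy node: read the last user message and return a reply.
    Real LangGraph nodes follow the same pattern: state in, update out."""
    last_text = state["messages"][-1][1]
    return {"messages": [("assistant", f"You said: {last_text}")]}

update = echo_node({"messages": [("user", "hello")]})
```

The returned dictionary is not the whole new state, just the piece the node wants to add; the graph merges it into the State for you.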
Step 4: Tell the Chatbot When to Start and Stop
Now, we’ll teach our chatbot when to start talking and when to stop.
- Start: The chatbot will listen as soon as you type something.
- Stop: The chatbot will stop after it replies.
We tell the chatbot when to start and stop like this:
# Define where the chatbot should start
graph_builder.add_edge(START, "chatbot")
# Define where the chatbot should stop
graph_builder.add_edge("chatbot", END)
Step 5: Running the Chatbot
Now that our chatbot has a brain and knows how to start and stop, let’s bring it to life! We’ll run the chatbot so you can type messages and get replies. You can exit the chatbot anytime by typing “quit”, “exit”, or “q”.
# Compile the graph (prepare it to run)
graph = graph_builder.compile()
# Start the chatbot loop (keeps it running)
while True:
    user_input = input("User: ")
    print("User: " + user_input)
    # Exit the chatbot when the user types 'quit', 'exit', or 'q'
    if user_input.lower() in ["quit", "exit", "q"]:
        print("Goodbye!")
        break
    # Send the message to the chatbot and print the response
    for event in graph.stream({"messages": [("user", user_input)]}):
        for value in event.values():
            print("Assistant:", value["messages"][-1].content)
What’s Happening Here?
- User Input: You type something like, “What’s LangGraph?”
- Chatbot Response: The chatbot thinks, then replies with information.
- Exit: If you want to stop chatting, just type “quit”, “exit”, or “q”.
Example Chatbot Conversation:
User: What's LangGraph all about?
Assistant: LangGraph is a project that helps people build chatbots using language data...
User: q
Goodbye!
What’s Next?
Congrats! You’ve built your first chatbot! 🎉
In the next steps, you can make your chatbot even smarter by:
- Adding tools like web search so the bot knows more.
- Using LangSmith to track and improve your chatbot’s responses.
- Creating more fun conversations by adding more nodes (tasks for the chatbot).
Here’s the full code if you want to see everything together:
# Full chatbot implementation code
from typing import Annotated
from typing_extensions import TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langchain_anthropic import ChatAnthropic
class State(TypedDict):
    messages: Annotated[list, add_messages]

graph_builder = StateGraph(State)

llm = ChatAnthropic(model="claude-3-haiku-20240307")

def chatbot(state: State):
    return {"messages": [llm.invoke(state["messages"])]}

graph_builder.add_node("chatbot", chatbot)
graph_builder.add_edge(START, "chatbot")
graph_builder.add_edge("chatbot", END)

graph = graph_builder.compile()

while True:
    user_input = input("User: ")
    print("User: " + user_input)
    if user_input.lower() in ["quit", "exit", "q"]:
        print("Goodbye!")
        break
    for event in graph.stream({"messages": [("user", user_input)]}):
        for value in event.values():
            print("Assistant:", value["messages"][-1].content)
Key Takeaways:
- Chatbots can talk to you like a real person.
- LangGraph helps build the chatbot’s brain using states and nodes.
- LangSmith is useful for improving the chatbot’s performance.
- By following these steps, you’ve built a simple chatbot!
Enhancing the Chatbot with Tools
Building a Smarter Chatbot: Using Tools
To make your chatbot smarter, we’ll integrate it with a web search tool. This tool helps the chatbot answer questions that it cannot handle just from memory. It allows the bot to search for relevant information online and provide better, more accurate responses to users.
Requirements: What You’ll Need
Before we jump into the code, let’s make sure we have everything set up:
- Install Required Packages: These are the tools needed to use the Tavily Search Engine and set the necessary API keys.
Run the following in your code environment:
%%capture --no-stderr
%pip install -U tavily-python
%pip install -U langchain_community
_set_env("TAVILY_API_KEY")
Setting up the Web Search Tool
Next, define the web search tool and tell the chatbot to use it when needed:
from langchain_community.tools.tavily_search import TavilySearchResults
tool = TavilySearchResults(max_results=2)
tools = [tool]
tool.invoke("What's a 'node' in LangGraph?")
The TavilySearchResults tool performs a search, returns summaries, and helps the bot give more informed answers. We cap it at 2 search results for efficiency.
Connecting the Tool to the Chatbot
We now need to connect the search tool to the chatbot, ensuring the chatbot can use it when necessary. First, define the chatbot like we did in Part 1, but this time include a new method call, bind_tools. This tells the chatbot which tools it can use.
from typing import Annotated
from langchain_anthropic import ChatAnthropic
from typing_extensions import TypedDict
from langgraph.graph import StateGraph, START
from langgraph.graph.message import add_messages
class State(TypedDict):
    messages: Annotated[list, add_messages]
graph_builder = StateGraph(State)
llm = ChatAnthropic(model="claude-3-haiku-20240307")
llm_with_tools = llm.bind_tools(tools)
Making the Chatbot Smarter with Tools
Now that we have connected the tools, let’s define a chatbot function that will handle both regular conversation and tool usage:
def chatbot(state: State):
    return {"messages": [llm_with_tools.invoke(state["messages"])]}

graph_builder.add_node("chatbot", chatbot)
This node passes the conversation to the tool-aware model; when the model decides it needs more information, its reply will contain a tool call instead of a final answer.
Running Tools Dynamically
To make things work smoothly, we need a function that can handle running the tools. Here’s how we can define this functionality:
import json
from langchain_core.messages import ToolMessage
class BasicToolNode:
    """A node that runs the tools requested in the last AIMessage."""

    def __init__(self, tools: list) -> None:
        self.tools_by_name = {tool.name: tool for tool in tools}

    def __call__(self, inputs: dict):
        if messages := inputs.get("messages", []):
            message = messages[-1]
        else:
            raise ValueError("No message found in input")
        outputs = []
        for tool_call in message.tool_calls:
            tool_result = self.tools_by_name[tool_call["name"]].invoke(
                tool_call["args"]
            )
            outputs.append(
                ToolMessage(
                    content=json.dumps(tool_result),
                    name=tool_call["name"],
                    tool_call_id=tool_call["id"],
                )
            )
        return {"messages": outputs}

# Register the tool node so the routing logic below can reach it
tool_node = BasicToolNode(tools=[tool])
graph_builder.add_node("tools", tool_node)
This class lets the bot run whichever tools the model requested and return the results as messages. When a user asks a question the bot can’t answer from its own knowledge, the model will emit a tool call and the web search runs automatically via the TavilySearchResults tool.
Routing Logic: What Happens Next?
Once the tool gives an answer, we need to make sure the chatbot knows what to do. Here’s how we define that logic:
from typing import Literal
def route_tools(state: State) -> Literal["tools", "__end__"]:
    if isinstance(state, list):
        ai_message = state[-1]
    elif messages := state.get("messages", []):
        ai_message = messages[-1]
    else:
        raise ValueError(f"No messages found in input state to tool_edge: {state}")
    if hasattr(ai_message, "tool_calls") and len(ai_message.tool_calls) > 0:
        return "tools"
    return "__end__"

graph_builder.add_conditional_edges(
    "chatbot",
    route_tools,
    {"tools": "tools", "__end__": "__end__"},
)
graph_builder.add_edge("tools", "chatbot")
graph_builder.add_edge(START, "chatbot")
graph = graph_builder.compile()
This code sets the logic to decide if the chatbot should use a tool (like the web search). If no tool is needed, the conversation simply ends.
Conclusion
Congrats! You’ve successfully added web search capabilities to your chatbot, making it much smarter and more versatile. It can now handle a wider range of user questions by retrieving up-to-date information from the web.
Next Steps
The chatbot still lacks memory, meaning it can’t recall past interactions. In the next part, we will work on adding memory so that it can hold coherent conversations over multiple turns.
Adding Memory to the Chatbot
Imagine you’re talking to a robot that remembers everything you say. Pretty cool, right? But at first, our robot (or chatbot) is a bit forgetful and doesn’t remember your past questions or conversations. This makes it hard to have a normal chat where you can ask follow-up questions.
LangGraph is like giving this robot a brain that can remember what you’ve said before. It uses something called checkpointing to save everything you say and the chatbot’s replies. Then, when you chat again, the robot can remember what you talked about earlier by checking its saved notes.
How Checkpointing Works
When you chat, LangGraph saves all the important information about the conversation at every step. Then, it checks the saved data whenever needed to ensure the conversation continues smoothly. Here’s how we can add memory to our chatbot.
Step 1: Setting up Memory
Let’s first create a special memory saver that will store the chat history.
from langgraph.checkpoint.memory import MemorySaver
memory = MemorySaver()
In our example, we’re using MemorySaver, which stores everything in the computer’s memory. But in real-world apps, you might want to save this data in a database like SQLite or PostgreSQL.
Step 2: Building the Chatbot Graph
We’ll now define how our chatbot works. The chatbot will get smarter by knowing how to use tools to answer user questions.
from langgraph.graph import StateGraph, START, END
from langgraph.prebuilt import ToolNode, tools_condition
from langchain_anthropic import ChatAnthropic
from langchain_community.tools.tavily_search import TavilySearchResults
from langgraph.graph.message import add_messages
from typing import Annotated, TypedDict
class State(TypedDict):
    messages: Annotated[list, add_messages]
graph_builder = StateGraph(State)
tool = TavilySearchResults(max_results=2)
tools = [tool]
llm = ChatAnthropic(model="claude-3-haiku-20240307")
llm_with_tools = llm.bind_tools(tools)
def chatbot(state: State):
    return {"messages": [llm_with_tools.invoke(state["messages"])]}
graph_builder.add_node("chatbot", chatbot)
tool_node = ToolNode(tools=[tool])
graph_builder.add_node("tools", tool_node)
graph_builder.add_conditional_edges("chatbot", tools_condition)
graph_builder.add_edge("tools", "chatbot")
graph_builder.add_edge(START, "chatbot")
Here, we use LangGraph to create a flow of conversation:
- chatbot node: handles the conversation using the AI model.
- tools node: lets the chatbot use tools like search engines.
- Checkpointer: saves the state after each step so the chatbot can recall past conversations.
Step 3: Compiling the Graph with Checkpointing
Finally, we add our memory saver to keep track of the conversation.
graph = graph_builder.compile(checkpointer=memory)
Step 4: Talking to the Chatbot
Now, we can talk to our chatbot! Let’s start with a simple conversation and see if it remembers your name.
config = {"configurable": {"thread_id": "1"}}
user_input = "Hi there! My name is Will."
events = graph.stream({"messages": [("user", user_input)]}, config, stream_mode="values")
for event in events:
    event["messages"][-1].pretty_print()
Output:
================================ Human Message =================================
Hi there! My name is Will.
================================== Ai Message ==================================
It's nice to meet you, Will! I'm an AI assistant created by Anthropic. I'm here to help with any questions or tasks you may have. Please let me know how I can assist you today.
Now, let’s check if the chatbot remembers your name:
user_input = "Remember my name?"
events = graph.stream({"messages": [("user", user_input)]}, config, stream_mode="values")
for event in events:
    event["messages"][-1].pretty_print()
Step 5: Starting a New Chat (Thread)
What if you start a new chat? Just change the thread_id.
events = graph.stream(
    {"messages": [("user", user_input)]},
    {"configurable": {"thread_id": "2"}},
    stream_mode="values",
)
for event in events:
    event["messages"][-1].pretty_print()
Output:
================================ Human Message =================================
Remember my name?
================================== Ai Message ==================================
I'm afraid I don't actually have the capability to remember your name specifically. As an AI assistant, I don't have a persistent memory of individual users or their names.
The chatbot forgets everything because we started a new thread (or new conversation). If you switch back to the previous thread, it will remember everything again!
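The mechanism behind this is simple: checkpoints are stored per thread_id, so each thread keeps its own isolated history. A toy model of that bookkeeping (this is an illustration of the idea, not the real MemorySaver internals):

```python
# Checkpoints keyed by thread_id: each conversation thread is isolated
checkpoints: dict = {}

def save_turn(thread_id: str, message: tuple) -> None:
    """Append a message to the history stored for this thread."""
    checkpoints.setdefault(thread_id, []).append(message)

save_turn("1", ("user", "Hi there! My name is Will."))
save_turn("1", ("assistant", "Nice to meet you, Will!"))
save_turn("2", ("user", "Remember my name?"))

# Thread "2" never saw the introduction, so there is nothing to recall
thread_two_history = checkpoints["2"]
```

Switching back to thread "1" hands the model its full saved history, which is why the name comes back.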
Conclusion: A Smarter Chatbot with Memory
By adding checkpointing with LangGraph, we’ve made our chatbot much smarter! It now remembers what we talked about, making conversations more natural and useful.
In future steps, we can make the chatbot even more advanced by adding features like error handling and human oversight.
Human-in-the-Loop
Sometimes, our chatbot needs a little help from humans to make sure it’s doing things correctly. This is called human-in-the-loop — it’s when a human steps in to guide the chatbot when needed. For example, if the chatbot needs approval before using a tool or doing something important, a human can check things out first.
In this part, we’ll make sure our chatbot waits for a human before doing certain tasks, like running a tool. Let’s see how we can do this with LangGraph!
What’s the Plan?
We will add a new feature to the chatbot from Part 3 to make it wait for human input before using any tools. Once the human approves, the chatbot will continue doing its job.
Here’s how we can add this human-in-the-loop feature.
Starting with the Code
We’ll begin by copying the existing code from Part 3 where our chatbot could remember things and use tools. Below is the setup for our chatbot.
from typing import Annotated
from langchain_anthropic import ChatAnthropic
from langchain_community.tools.tavily_search import TavilySearchResults
from typing_extensions import TypedDict
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, START
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition
# Create a memory saver
memory = MemorySaver()
# Define the state of the chatbot
class State(TypedDict):
    messages: Annotated[list, add_messages]
graph_builder = StateGraph(State)
# Define a tool the chatbot can use
tool = TavilySearchResults(max_results=2)
tools = [tool]
llm = ChatAnthropic(model="claude-3-haiku-20240307")
llm_with_tools = llm.bind_tools(tools)
# Define the chatbot's behavior
def chatbot(state: State):
    return {"messages": [llm_with_tools.invoke(state["messages"])]}
# Add the chatbot and tools to the graph
graph_builder.add_node("chatbot", chatbot)
tool_node = ToolNode(tools=[tool])
graph_builder.add_node("tools", tool_node)
# Add conditions and connections between the chatbot and tools
graph_builder.add_conditional_edges("chatbot", tools_condition)
graph_builder.add_edge("tools", "chatbot")
graph_builder.add_edge(START, "chatbot")
Adding Human Control with Interruptions
Now, we’ll make the chatbot pause and wait for a human to approve before using any tools. We’ll do this by telling the chatbot to “interrupt” before running a tool. Here’s the code to set this up:
# Compile the graph and add a pause before using tools
graph = graph_builder.compile(
    checkpointer=memory,
    interrupt_before=["tools"],
)
In this code, the interrupt_before option tells the graph to stop just before running the tools node. This gives a human the chance to review and decide whether to continue or not.
Let’s Test It Out!
Now that we’ve added the pause, let’s give our chatbot a task. We’ll ask it to research LangGraph, and the chatbot will pause before using the tool to search.
user_input = "I'm learning LangGraph. Could you do some research on it for me?"
config = {"configurable": {"thread_id": "1"}}
# The chatbot will pause before using the tool
events = graph.stream(
    {"messages": [("user", user_input)]}, config, stream_mode="values"
)
for event in events:
    if "messages" in event:
        event["messages"][-1].pretty_print()
Here’s what happens:
- You ask the chatbot to search for LangGraph.
- It will pause and show you that it’s about to search for LangGraph using the tool.
- You (the human) decide if the chatbot can continue.
Checking the Pause
Once the chatbot pauses, we can check its state to confirm it’s waiting for approval.
snapshot = graph.get_state(config)
snapshot.next
This will show us that the chatbot is waiting to use the tool. If we’re happy with what it’s about to do, we can let it continue:
# Let the chatbot continue without adding anything new to the state
events = graph.stream(None, config, stream_mode="values")
for event in events:
    if "messages" in event:
        event["messages"][-1].pretty_print()
What Happens Next?
Once the chatbot is allowed to continue, it will run the tool, get the search results, and show them to you. Here’s an example of what the chatbot might find about LangGraph:
LangGraph is a framework for building language-based AI agents and applications using language models. It provides a modular, graph-based approach for creating chatbots, code assistants, planning agents, and other language-centric applications.
Some key things I learned about LangGraph:
- It's designed for creating advanced AI applications with stateful, multi-actor workflows.
- LangGraph integrates with LangChain for more powerful features.
- It provides examples and tutorials to help you get started.
Wrapping Up
By adding a human-in-the-loop step, we can ensure that the chatbot gets human approval before taking certain actions. This is useful when you want to double-check what the chatbot is doing or require human oversight for important tasks.
Now, you know how to:
- Pause the chatbot before running tools.
- Approve or deny actions to make sure things run smoothly.
- Use the interrupt_before feature for human-in-the-loop control.
This approach opens up many possibilities, such as creating custom user interfaces where a human can control the chatbot’s actions.
Manually Updating the State
In the previous section, we showed how to pause a graph so a human could inspect its actions, allowing them to read the state. However, sometimes a user may want to change the agent’s course. This is where LangGraph’s ability to manually update the state becomes extremely useful.
Why Update State?
Updating the state allows you to modify the agent’s trajectory by changing its actions. This can be useful for:
- Correcting mistakes the agent has made.
- Exploring alternative paths.
- Guiding the agent towards a specific goal.
Let’s demonstrate how to update a checkpointed state. We’ll start with the same graph definition we used earlier.
from typing import Annotated
from langchain_anthropic import ChatAnthropic
from langchain_community.tools.tavily_search import TavilySearchResults
from typing_extensions import TypedDict
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, START
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition
class State(TypedDict):
    messages: Annotated[list, add_messages]
graph_builder = StateGraph(State)
tool = TavilySearchResults(max_results=2)
tools = [tool]
llm = ChatAnthropic(model="claude-3-haiku-20240307")
llm_with_tools = llm.bind_tools(tools)
def chatbot(state: State):
    return {"messages": [llm_with_tools.invoke(state["messages"])]}
graph_builder.add_node("chatbot", chatbot)
tool_node = ToolNode(tools=[tool])
graph_builder.add_node("tools", tool_node)
graph_builder.add_conditional_edges("chatbot", tools_condition)
graph_builder.add_edge("tools", "chatbot")
graph_builder.add_edge(START, "chatbot")
memory = MemorySaver()
graph = graph_builder.compile(
    checkpointer=memory,
    interrupt_before=["tools"],
)
user_input = "I'm learning LangGraph. Could you do some research on it for me?"
config = {"configurable": {"thread_id": "1"}}
events = graph.stream({"messages": [("user", user_input)]}, config)
for event in events:
    if "messages" in event:
        event["messages"][-1].pretty_print()
Modifying the State
Suppose we don’t want the tool to be called and instead want to provide the correct response manually. Here’s how you can update the state to bypass the tool call and modify the AI’s response directly:
from langchain_core.messages import AIMessage, ToolMessage

# Fetch the paused state; the last message is the AI's pending tool call
snapshot = graph.get_state(config)
existing_message = snapshot.values["messages"][-1]

answer = "LangGraph is a library for building stateful, multi-actor applications with LLMs."
new_messages = [
    # Satisfy the pending tool call with our own result
    ToolMessage(content=answer, tool_call_id=existing_message.tool_calls[0]["id"]),
    # Then add the AI's final response directly
    AIMessage(content=answer),
]

new_messages[-1].pretty_print()
graph.update_state(
    config,
    {"messages": new_messages},
)

print("\n\nLast 2 messages:")
print(graph.get_state(config).values["messages"][-2:])
Updating State and Continuing the Graph
You can also update the state and tell the graph to treat it as if the update came from a specific node. For example:
graph.update_state(
    config,
    {"messages": [AIMessage(content="I'm an AI expert!")]},
    as_node="chatbot",
)
Overwriting Existing Messages
Sometimes, instead of appending new messages, you may want to overwrite existing ones. To do this, call update_state() with a replacement message that keeps the original message’s ID. Here’s an example:
snapshot = graph.get_state(config)
existing_message = snapshot.values["messages"][-1]
new_tool_call = existing_message.tool_calls[0].copy()
new_tool_call["args"]["query"] = "LangGraph human-in-the-loop workflow"
new_message = AIMessage(
    content=existing_message.content,
    tool_calls=[new_tool_call],
    # Reusing the same message ID replaces the message instead of appending it
    id=existing_message.id,
)
graph.update_state(config, {"messages": [new_message]})
Part 6: Customizing State
So far, we’ve relied on a simple state (it’s just a list of messages!). You can go far with this simple state, but if you want to define complex behavior without relying on the message list, you can add additional fields to the state. In this section, we will extend our chatbot with a new node to illustrate this.
In the examples above, we involved a human deterministically: the graph always interrupted whenever a tool was invoked. Suppose we wanted our chatbot to have the choice of relying on a human.
One way to do this is to create a passthrough “human” node, before which the graph will always stop. We will only execute this node if the LLM invokes a “human” tool. For our convenience, we will include an “ask_human” flag in our graph state that we will flip if the LLM calls this tool.
Below, define this new graph, with an updated State:
from typing import Annotated
from langchain_anthropic import ChatAnthropic
from langchain_community.tools.tavily_search import TavilySearchResults
from typing_extensions import TypedDict
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, START
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition
class State(TypedDict):
    messages: Annotated[list, add_messages]
    # This flag is new
    ask_human: bool
Next, define a schema to show the model, so it can decide whether to request assistance.
Using Pydantic with LangChain
This notebook uses the Pydantic v2 BaseModel, which requires langchain-core >= 0.3. Using langchain-core < 0.3 will result in errors due to mixing of Pydantic v1 and v2 BaseModels.
from pydantic import BaseModel
class RequestAssistance(BaseModel):
    """Escalate the conversation to an expert. Use this if you are unable to assist directly or if the user requires support beyond your permissions.

    To use this function, relay the user's 'request' so the expert can provide the right guidance.
    """

    request: str
Next, define the chatbot node. The primary modification here is flipping the ask_human flag if we see that the chatbot has invoked the RequestAssistance tool.
tool = TavilySearchResults(max_results=2)
tools = [tool]
llm = ChatAnthropic(model="claude-3-haiku-20240307")
# We can bind the llm to a tool definition, a pydantic model, or a json schema
llm_with_tools = llm.bind_tools(tools + [RequestAssistance])
def chatbot(state: State):
    response = llm_with_tools.invoke(state["messages"])
    ask_human = False
    if (
        response.tool_calls
        and response.tool_calls[0]["name"] == RequestAssistance.__name__
    ):
        ask_human = True
    return {"messages": [response], "ask_human": ask_human}
Next, create the graph builder and add the chatbot and tools nodes to the graph, same as before.
graph_builder = StateGraph(State)
graph_builder.add_node("chatbot", chatbot)
graph_builder.add_node("tools", ToolNode(tools=[tool]))
Next, create the “human” node. This node function is mostly a placeholder in our graph that will trigger an interrupt. If the human does not manually update the state during the interrupt, it inserts a tool message so the LLM knows the user was requested but didn’t respond. This node also unsets the ask_human flag so the graph knows not to revisit the node unless further requests are made.
from langchain_core.messages import AIMessage, ToolMessage
def create_response(response: str, ai_message: AIMessage):
    return ToolMessage(
        content=response,
        tool_call_id=ai_message.tool_calls[0]["id"],
    )

def human_node(state: State):
    new_messages = []
    if not isinstance(state["messages"][-1], ToolMessage):
        # Typically, the user will have updated the state during the interrupt.
        # If they choose not to, we will include a placeholder ToolMessage to
        # let the LLM continue.
        new_messages.append(
            create_response("No response from human.", state["messages"][-1])
        )
    return {
        # Append the new messages
        "messages": new_messages,
        # Unset the flag
        "ask_human": False,
    }

graph_builder.add_node("human", human_node)
Next, define the conditional logic. The select_next_node function will route to the human node if the flag is set. Otherwise, it lets the prebuilt tools_condition function choose the next node.

Recall that the tools_condition function simply checks whether the chatbot has responded with any tool calls in its message. If so, it routes to the tools node. Otherwise, it ends the graph.
def select_next_node(state: State):
    if state["ask_human"]:
        return "human"
    # Otherwise, we can route as before
    return tools_condition(state)

graph_builder.add_conditional_edges(
    "chatbot",
    select_next_node,
    {"human": "human", "tools": "tools", "__end__": "__end__"},
)
Finally, add the simple directed edges and compile the graph. These edges instruct the graph to always flow from node a -> b whenever a finishes executing.
# The rest is the same
graph_builder.add_edge("tools", "chatbot")
graph_builder.add_edge("human", "chatbot")
graph_builder.add_edge(START, "chatbot")
memory = MemorySaver()
graph = graph_builder.compile(
    checkpointer=memory,
    # We interrupt before 'human' here instead.
    interrupt_before=["human"],
)
If you have the visualization dependencies installed, you can see the graph structure below:
from IPython.display import Image, display
try:
    display(Image(graph.get_graph().draw_mermaid_png()))
except Exception:
    # This requires some extra dependencies and is optional
    pass
The chatbot can either request help from a human (chatbot -> select -> human), invoke the search engine tool (chatbot -> select -> tools), or respond directly (chatbot -> select -> __end__). Once an action or request has been made, the graph will transition back to the chatbot node to continue operations.
Let’s see this graph in action. We will request expert assistance to illustrate our graph.
user_input = "I need some expert guidance for building this AI agent. Could you request assistance for me?"
config = {"configurable": {"thread_id": "1"}}
# The config is the **second positional argument** to stream() or invoke()!
events = graph.stream(
    {"messages": [("user", user_input)]}, config, stream_mode="values"
)
for event in events:
    if "messages" in event:
        event["messages"][-1].pretty_print()
Time Travel: Rewinding the Conversation
The goal of this final part is to demonstrate how you can “rewind” and resume the chatbot workflow at any previous checkpoint in LangGraph. This functionality enables the exploration of alternate paths, debugging, and correcting past mistakes, all of which matter when building complex systems like chatbots or autonomous agents.
Key Concepts:
- State Checkpoints: At each step in a chatbot’s workflow, LangGraph saves a checkpoint representing the state of the conversation, including messages, tool calls, and AI responses. These checkpoints allow you to roll back to a previous point in the workflow and “branch off” into a different path.
- Rewind and Resume: By fetching a particular checkpoint from the state history, you can replay the conversation from that point in time. The system can resume execution from any node in the graph, based on the captured state at that checkpoint.
How It Works:
- Setting Up State History: Each message, action, and response is saved into the StateGraph. This includes tool invocations, LLM responses, and any decisions made by the chatbot. By saving these states, LangGraph makes it possible to rewind to a prior state and alter the flow of the interaction.
- Getting State History: The get_state_history() method retrieves all checkpoints in the history. Each checkpoint contains values (the saved state, such as messages) and next (the next node to execute in the graph).
- Resuming Execution: Once a specific checkpoint is chosen, you can resume execution by providing its checkpoint_id. The graph will then resume from that point, either continuing with the flow or branching into a new path based on user input or system conditions.
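The shape of this workflow can be sketched in plain Python. In LangGraph itself, the history comes from graph.get_state_history(config) and you resume by streaming from a chosen checkpoint’s config; the snippet below is only a toy model of that idea, not real LangGraph code:

```python
# Toy model of checkpoint-based "time travel": every step appends a snapshot,
# and resuming from snapshot i simply branches from that saved state.
history: list = []

def run_step(state: dict, user_msg: str) -> dict:
    """Advance the conversation and save a checkpoint of the new state."""
    new_state = {"messages": state["messages"] + [("user", user_msg)]}
    history.append(new_state)  # analogous to LangGraph saving a checkpoint
    return new_state

state = {"messages": []}
state = run_step(state, "Could you do some research on LangGraph for me?")
state = run_step(state, "Maybe I'll build an autonomous agent with it!")

# "Rewind" to the first checkpoint and branch onto a different path
to_replay = history[0]
branched = run_step(to_replay, "Actually, explain checkpoints first.")
```

The original conversation is untouched; the branch simply grows from the earlier snapshot, which is exactly the “branch off into a different path” behavior described above.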
Example Flow:
- Initial Conversation: The user asks, “Could you do some research on LangGraph for me?” The bot invokes the search tool, fetches results, and provides a summary.
- Continuing Interaction: The user responds, “Maybe I’ll build an autonomous agent with it!” The bot provides insights on how LangGraph can help build autonomous agents.
- Rewind and Resume: After executing multiple steps, the conversation can be rolled back to an earlier checkpoint using its checkpoint_id. This allows the chatbot to “time travel” and resume from a specific point, replaying the steps or enabling alternative outcomes.
Practical Applications:
- Debugging: Helps identify mistakes in a chatbot’s logic by rolling back to the exact point where an error occurred.
- Interactive User Experience: Users can “undo” actions, explore different responses, or replay their chatbot’s decisions.
- Complex Workflows: Enables the design of workflows where users can go back and tweak their inputs or decisions, crucial for systems like autonomous software agents.
This checkpoint-based “time travel” capability makes LangGraph a powerful tool for building flexible, stateful LLM applications, allowing for the exploration of alternative paths while ensuring robust debugging and user interaction.