LangGraph Travel Assistant — AI-Driven Tool Routing with LLM Decision Making
This example demonstrates how to build a travel assistant using LangGraph and LLM-based tool orchestration — where each tool is represented as an independent node in the workflow graph.
Instead of hardcoding if/else logic for when to call a specific function, we let the LLM act as a router that decides which tool (if any) to invoke based on the user’s natural-language input. The flow dynamically routes through different tool nodes, executes them, and then summarizes or responds using the LLM.
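The routing idea itself is small: ask the model for a structured decision and dispatch on it. A minimal sketch of that contract, with `llm_reply` standing in for the model's raw output (the full example below wires this into a graph):

```python
import json

# Hypothetical raw reply from the routing LLM; the real router asks the model
# to emit exactly this JSON shape: {"next_tool": ..., "params": {...}}.
llm_reply = '{"next_tool": "get_hotels", "params": {"city": "Kyoto"}}'

def route(reply: str) -> str:
    """Map the model's JSON decision onto a node name, defaulting to respond."""
    try:
        decision = json.loads(reply)
    except json.JSONDecodeError:
        decision = {"next_tool": "none"}
    tool = decision.get("next_tool", "none")
    return tool if tool in ("get_cities", "get_hotels") else "respond"

print(route(llm_reply))          # → get_hotels
print(route("not json at all"))  # → respond
```

Defaulting to `respond` on any parse failure means a malformed model reply degrades to a direct answer rather than a crash.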
🧩 Architecture Overview
User → Router (LLM)
├─→ GetCitiesTool → Respond
├─→ GetHotelsTool → Respond
└─→ Respond (Direct LLM Answer)
🔹 Components
- Router Node – Uses the LLM to interpret user intent and decide the next node (`get_cities`, `get_hotels`, or `respond`).
- Tool Nodes – Represent individual “MCP-style” helper tools, each encapsulated in its own graph node.
- Respond Node – Summarizes tool results or answers directly if no tool was used.
- StateGraph – Manages transitions between nodes and retains conversation state.
- Conditional Edges – Define which node to jump to based on the router’s output (`next_tool`).
🧠 What It Demonstrates
- Natural-language decision-making using LLMs.
- Structured, modular orchestration of multiple tools.
- Declarative control flow using LangGraph’s state-based transitions.
- A hybrid approach where the LLM can either use its own knowledge or call tools when needed.
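The hybrid behavior in the last point comes down to one branch in the respond step. A sketch, where the returned strings are placeholders for the two LLM calls made in the full code:

```python
def respond(state: dict) -> str:
    # If a tool ran, its output is in state["tool_result"]; otherwise the
    # model answers from its own knowledge. The f-strings stand in for the
    # two actual LLM calls in the full example.
    if state.get("tool_result"):
        return f"summarize: {state['tool_result']}"
    return f"answer directly: {state['user_input']}"

print(respond({"user_input": "Best time to visit Japan?", "tool_result": None}))
# → answer directly: Best time to visit Japan?
```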
💬 Example Interactions
User: Show me cities in Japan.
→ 🤖 Cities in Japan include Tokyo, Kyoto, and Osaka.
User: Find me hotels in Kyoto.
→ 🤖 Some great hotels in Kyoto are The Thousand Kyoto and Hoshinoya Kyoto.
User: When is the best time to visit Japan?
→ 🤖 The best time to visit Japan is during spring or autumn.
This approach cleanly separates logic, tool execution, and decision-making — providing the flexibility of AI-driven routing with the structure and observability of a graph-based workflow.
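Conceptually, a conditional edge is just a condition function plus a mapping from its return value to a node name. A rough sketch of that lookup (an illustration, not LangGraph internals):

```python
# How the router's conditional edge resolves the next node, conceptually:
# run the condition function on the state, then look its result up in a map.
path_map = {"get_cities": "get_cities", "get_hotels": "get_hotels", "none": "respond"}

def condition(state: dict) -> str:
    return state["next_tool"]

def next_node(state: dict) -> str:
    return path_map[condition(state)]

print(next_node({"next_tool": "none"}))        # → respond
print(next_node({"next_tool": "get_cities"}))  # → get_cities
```

The full, runnable example follows.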
from langgraph.graph import StateGraph, END
from langgraph.checkpoint.memory import MemorySaver
from langchain_openai import ChatOpenAI
from typing import TypedDict, Optional
import json
# ----------------------------
# Shared State
# ----------------------------
class TravelState(TypedDict):
user_input: str
next_tool: Optional[str]
tool_result: Optional[str]
tool_params: Optional[dict]
# ----------------------------
# Sample Tool Nodes
# ----------------------------
def get_cities_node(state: TravelState):
params = state.get("tool_params", {})
country = params.get("country", "Japan")
state["tool_result"] = f"Cities in {country}: Tokyo, Kyoto, Osaka"
return state
def get_hotels_node(state: TravelState):
params = state.get("tool_params", {})
city = params.get("city", "Kyoto")
state["tool_result"] = f"Hotels in {city}: The Thousand Kyoto, Hoshinoya Kyoto"
return state
# ----------------------------
# LLM Setup
# ----------------------------
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
# ----------------------------
# Router Node
# ----------------------------
def router_node(state: TravelState):
user_input = state["user_input"]
system_prompt = """
You are a travel assistant.
Choose which tool (if any) to use:
- get_cities if user mentions a country (like Japan, France)
- get_hotels if user mentions a city (like Kyoto, Tokyo)
Otherwise reply directly.
Respond as JSON:
{"next_tool": "get_cities" | "get_hotels" | "none", "params": {...}}
"""
result = llm.invoke([
{"role": "system", "content": system_prompt},
{"role": "user", "content": user_input},
])
    try:
        content = result.content.strip()
        # Strip Markdown code fences in case the model wrapped its JSON
        if content.startswith("```"):
            content = content.strip("`").removeprefix("json").strip()
        parsed = json.loads(content)
    except Exception:
        parsed = {"next_tool": "none", "params": {}}
    # Fall back to a direct answer if the reply is malformed or unexpected
    state["next_tool"] = parsed.get("next_tool", "none")
    state["tool_params"] = parsed.get("params", {})
    if state["next_tool"] not in ("get_cities", "get_hotels"):
        state["next_tool"] = "none"
    return state
# ----------------------------
# Respond Node
# ----------------------------
def respond_node(state: TravelState):
    if state.get("tool_result"):
# Summarize or contextualize tool output
response = llm.invoke([
{"role": "system", "content": "Summarize the tool result for the user."},
{"role": "user", "content": state["tool_result"]}
])
print("🤖", response.content)
else:
# No tool used — direct answer
response = llm.invoke([
{"role": "system", "content": "You are a travel expert."},
{"role": "user", "content": state["user_input"]}
])
print("🤖", response.content)
return state
# ----------------------------
# Build LangGraph
# ----------------------------
workflow = StateGraph(TravelState)
# Add nodes
workflow.add_node("router", router_node)
workflow.add_node("get_cities", get_cities_node)
workflow.add_node("get_hotels", get_hotels_node)
workflow.add_node("respond", respond_node)
# Entry point
workflow.set_entry_point("router")
# Conditional routing from router
workflow.add_conditional_edges(
"router",
lambda state: state["next_tool"],
{
"get_cities": "get_cities",
"get_hotels": "get_hotels",
"none": "respond"
}
)
# After tools, go to respond
workflow.add_edge("get_cities", "respond")
workflow.add_edge("get_hotels", "respond")
# End after respond
workflow.add_edge("respond", END)
# Memory
memory = MemorySaver()
graph = workflow.compile(checkpointer=memory)
# ----------------------------
# Run Example
# ----------------------------
queries = [
{"user_input": "Show me cities in Japan."},
{"user_input": "Find me hotels in Kyoto."},
{"user_input": "When is the best time to visit Japan?"}
]
for i, q in enumerate(queries):
    print(f"\n💬 User: {q['user_input']}")
    # A checkpointer requires a thread_id; a fresh one per query keeps runs independent
    graph.invoke(q, config={"configurable": {"thread_id": f"demo-{i}"}})
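A side benefit of this structure: each tool node is a plain function over the shared state, so it can be unit-tested without the LLM or the graph. The node is re-declared here so the snippet runs standalone:

```python
from typing import Optional, TypedDict

class TravelState(TypedDict):
    user_input: str
    next_tool: Optional[str]
    tool_result: Optional[str]
    tool_params: Optional[dict]

# Same node as in the example above, re-declared for a standalone test.
def get_hotels_node(state: TravelState):
    params = state.get("tool_params") or {}
    city = params.get("city", "Kyoto")
    state["tool_result"] = f"Hotels in {city}: The Thousand Kyoto, Hoshinoya Kyoto"
    return state

state: TravelState = {"user_input": "", "next_tool": "get_hotels",
                      "tool_result": None, "tool_params": {"city": "Kyoto"}}
print(get_hotels_node(state)["tool_result"])
# → Hotels in Kyoto: The Thousand Kyoto, Hoshinoya Kyoto
```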