A production-ready, multi-agent AI system for e-commerce operations with human oversight, MySQL persistence, and FastAPI REST integration.
- Overview
- Architecture
- Key Features
- Project Structure
- Core Concepts
- Installation & Setup
- Usage Guide
- API Reference
- Agent Types
- Human-in-the-Loop Workflow
- Database Design
- Configuration
- Examples
- Advanced Topics
- Troubleshooting
This project demonstrates an enterprise-grade Human-in-the-Loop (HITL) system for e-commerce customer support using LangGraph, LangChain, and OpenAI's GPT-4o-mini. It showcases:
- ✅ Multi-agent orchestration with conditional routing and fan-out execution
- ✅ Human oversight via graph interrupts and approval workflows
- ✅ MySQL persistence with a custom `MySQLSaver` checkpointer
- ✅ FastAPI REST API for seamless integration
- ✅ Modular agent design - easily extensible for new agent types
- ✅ Session management with stateful graph execution
- ✅ Error handling & fallbacks including AWS Bedrock region fallbacks
- ✅ Tool-calling patterns for function execution within agent responses
This system handles complex customer service tasks like order tracking, payment processing, refunds, shipping, and delivery management—all with intelligent routing and mandatory human approval for high-risk operations.
┌──────────────────────────────────────────────────────────────────┐
│ USER REQUEST │
└─────────────────────────────┬──────────────────────────────────┘
│
▼
┌─────────────────────┐
│ ROUTER NODE │
│ (LLM + keywords) │
│ Decides next agent │
└──────────┬──────────┘
│
┌──────────────────────┼──────────────────────┐
│ Conditional Fan-Out (can invoke 1 or many) │
▼ ▼ ▼
┌─────────────┐ ┌──────────────┐ ┌──────────────────┐
│ ORDER AGENT │ │ PAYMENT AGT │ │ DELIVERY AGENT │
│ (LLM + Tool)│ │ (LLM + Tool) │ │ (LLM + Tool) │
└──────┬──────┘ └──────┬───────┘ └────────┬─────────┘
│ │ │
▼ ▼ ▼
┌──────────────────────────────────────────────────────┐
│ SHIPPING AGENT & REFUND AGENT │
│ (can execute in parallel) │
└──────────────────────┬───────────────────────────────┘
│
▼
┌─────────────────────┐
│ REVIEW REQUIRED? │
│ (checks flags) │
└────────┬────────┬──┘
YES ◄───┴────┴──► NO
│ │
▼ ▼
┌────────────────┐ ┌─────────────────┐
│ HUMAN REVIEW │ │ SYNTHESIZE │
│ NODE (HITL) │ │ RESPONSE │
│ [INTERRUPT] │ │ │
└────────┬───────┘ └────────┬────────┘
│ │
▼ │
┌────────────────┐ │
│ SYNTHESIZE ├───────────┘
│ RESPONSE │
└────────┬───────┘
│
▼
┌────────────────────────────┐
│ RETURN FINAL RESPONSE │
│ + Session Checkpoint │
└────────────────────────────┘
Each agent follows the same pattern:
Input State
│
▼
┌──────────────────────────────────────────┐
│ Agent Function (e.g., support_agent) │
│ 1. Extract messages from state │
│ 2. Invoke LLM with bound tools │
│ 3. LLM decides which tool to call │
│ 4. Return new message to state │
└──────────────────────────────────────────┘
│
▼
Tool Node (if tool was invoked)
│
├─── Tool Execution (e.g., check_order_status)
│
▼
Tool Result Message
│
▼
Output State (updated with new messages)
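The per-agent pattern above can be sketched without any framework. This is a minimal, illustrative simulation: `StubLLM`, the tool registry, and the message shapes are stand-ins for the project's real LLM and LangChain tool bindings, not its actual API.

```python
# Stand-in tool (the real project binds LangChain @tool functions).
def check_order_status(order_id: str) -> str:
    """Look up an order by ID."""
    return f"Order {order_id} is in transit"

TOOLS = {"check_order_status": check_order_status}

class StubLLM:
    """Pretends to be an LLM that decides to call a tool."""
    def invoke(self, messages):
        # A real LLM would inspect `messages`; here we always pick the tool.
        return {"tool": "check_order_status", "args": {"order_id": "12345"}}

def agent_node(state: dict) -> dict:
    """Steps 1-4: read messages, ask the LLM, append its decision."""
    decision = StubLLM().invoke(state["messages"])
    return {"messages": state["messages"] + [decision]}

def tool_node(state: dict) -> dict:
    """Execute the tool the LLM selected and append its result."""
    call = state["messages"][-1]
    result = TOOLS[call["tool"]](**call["args"])
    return {"messages": state["messages"] + [{"tool_result": result}]}

state = {"messages": [{"role": "user", "content": "Track order #12345"}]}
state = tool_node(agent_node(state))
print(state["messages"][-1]["tool_result"])  # → Order 12345 is in transit
```

The state-in, state-out shape is the important part: each node receives the accumulated message list and returns a new state with its contribution appended.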
- 5+ configurable agents (order, payment, delivery, shipping, refund)
- Each agent is an independent LLM node with specific tools
- Agents can run in parallel or sequentially
- Easy to extend with new agent types
- Router node analyzes user input and decides which agents to invoke
- Can route to single agent or multiple agents simultaneously
- Uses both keyword matching and LLM-based reasoning
- Supports conditional routing based on state flags
- Interrupts before human review node to pause graph execution
- Human approval/rejection for sensitive operations (refunds, high-value orders)
- Feedback loop to agents based on human decisions
- Session persistence - user can resume interrupted workflows
- Audit trail of all human interventions
- Thread-safe MySQL persistence (LangGraph has no built-in MySQL support)
- Stores graph snapshots/checkpoints for recovery
- Supports multi-turn conversations with session continuity
- Production-ready connection pooling
- POST `/chat` - Send user message, get response
- POST `/human-review` - Submit human approval/rejection
- GET `/status/{session_id}` - Get current session state
- GET `/health` - Health check endpoint
- Swagger UI at `/docs` for API exploration
- LangChain tool decorators for defining agent capabilities
- Tool node for executing selected tools
- Support for tool arguments and return values
- Easy to add new tools (order tracking, payment processing, etc.)
- Pydantic `AgentState` for type-safe state
- Hierarchical results from each agent type
- Conversation history with full message thread
- Session IDs for multi-turn sessions
- AWS Bedrock with multiple region fallbacks
- Primary and secondary model configuration
- Graceful degradation on failures
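The fallback behaviour can be sketched without Bedrock: try the primary model, then each fallback in order until one succeeds. The "models" here are plain callables standing in for real region clients; names and the simulated outage are illustrative.

```python
def with_fallbacks(primary, fallbacks):
    """Return a callable that tries primary, then each fallback in order."""
    def invoke(prompt):
        last_error = None
        for model in [primary, *fallbacks]:
            try:
                return model(prompt)
            except Exception as exc:  # real code would catch throttling/region errors only
                last_error = exc
        raise last_error
    return invoke

def us_east_1(prompt):   # primary region: simulated outage
    raise RuntimeError("us-east-1 throttled")

def us_west_2(prompt):   # first fallback succeeds
    return f"[us-west-2] {prompt}"

llm = with_fallbacks(us_east_1, [us_west_2])
print(llm("hello"))  # → [us-west-2] hello
```

LangChain's `with_fallbacks()` (used later in `llm_factory`) implements this same try-in-order logic at the model level.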
langgraph_human_in_the_loop_HITL/
│
├── 📄 README.md # This file
├── 📄 requirements.txt # pip dependencies
├── 📄 pyproject.toml # Python project configuration
├── 🐍 main.py # Entry point example
├── 🐍 graph.py # Graph construction + CLI
├── 🐍 custom_agents.py # Agent definitions
├── 🐍 human_loop.py # HITL workflow & LLM factory
├── 🐍 tools.py # Tool implementations
├── 🐍 api_run.py # FastAPI runner
├── 📄 before_refund.md # Refund workflow docs
├── 🐍 mysql_db_test.py # MySQL connection testing
│
└── 📁 langgraph_human_loop/ # Main application package
├── 🐍 __init__.py
├── 🐍 state.py # AgentState + TypeDicts
├── 🐍 agents.py # 5+ agent implementations
├── 🐍 router.py # Router node + routing logic
├── 🐍 human_loop.py # Human review + synthesis
├── 🐍 graph.py # Graph builder + structure
├── 🐍 api.py # FastAPI endpoints
├── 🐍 support_agent.py # Support-specific logic
├── 🐍 chatbot_with_hitl.py # Chatbot integration
├── 🐍 opensearch_google_adk.py # Search backend integration
├── 🐍 hh.py # Helper utilities
├── 📄 README.md
├── 📄 requirements.txt
│
└── 📁 human_loop_fastapi/ # FastAPI submodule
├── 🐍 api_run.py
├── 🐍 custom_agents.py
├── 🐍 graph.py
├── 🐍 human_loop.py
├── 🐍 tools.py
├── 📄 before_refund.md
├── 🐍 mysql_db_test.py
│
└── 📁 temps/ # Temporary/test files
├── 🐍 custom_agents.py
└── 🐍 graph.py
Agents are the decision-makers in the graph. Each agent:
- Takes the current state as input
- Uses an LLM to decide what to do
- May call tools (check order, process payment, etc.)
- Returns updated state with new messages
Example: order_agent checks order status and updates state.order_result
Routing directs user requests to the right agent(s):
- Router node: Analyzes `user_input` and sets `next_agents`
- Conditional edges: Use router's decision to fan-out to agents
- Multi-agent routing: Can invoke 0, 1, or multiple agents
Example: "I want a refund" → routes to ["order_agent", "refund_agent"]
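That routing rule can be sketched as a plain keyword function. The keyword lists are illustrative; the real router also applies LLM reasoning.

```python
def route(user_input: str) -> list[str]:
    """Keyword fan-out: a request may map to zero, one, or several agents."""
    msg = user_input.lower()
    agents = []
    if any(w in msg for w in ("refund", "return", "cancel")):
        agents += ["order_agent", "refund_agent"]
    if any(w in msg for w in ("payment", "charge", "billing")):
        agents.append("payment_agent")
    if any(w in msg for w in ("track", "delivery", "shipping")):
        agents += ["order_agent", "delivery_agent"]
    return agents or ["support_agent"]  # default: general support

print(route("I want a refund"))      # → ['order_agent', 'refund_agent']
print(route("Where is my stuff?"))   # → ['support_agent']
```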
Interrupts pause graph execution waiting for human input:
approval = interrupt({
    "type": "refund_approval",
    "order_id": order_id,
    "amount": amount,
})
# Graph pauses here until human provides approval
if approval:
    # Continue with refund

State is the single source of truth flowing through the graph:
- Input: `user_input`, `session_id`, conversation history
- Processing: `next_agents`, `current_agent`, router reasoning
- Output: Results from each agent type, `final_response`
- HITL: `requires_human_review`, `human_approved`, `human_feedback`
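The state fields above can be sketched as a TypedDict. Field names come from this document; the exact types (and the real `state.py`, which uses Pydantic) may differ.

```python
from typing import Optional, TypedDict

class AgentState(TypedDict, total=False):
    # Input
    user_input: str
    session_id: str
    messages: list                    # conversation history
    # Processing
    next_agents: list[str]
    current_agent: Optional[str]
    # Output
    final_response: Optional[str]
    # HITL
    requires_human_review: bool
    human_approved: Optional[bool]
    human_feedback: Optional[str]

state: AgentState = {"user_input": "I want a refund", "session_id": "s-1"}
state["requires_human_review"] = True  # set by the refund agent, for example
```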
Checkpointing saves graph state at each step:
- Enables resuming interrupted flows
- Provides audit trail
- Stores messages and agent outputs
- MySQL checkpointer: Custom implementation for DB persistence
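The checkpointing contract reduces to "snapshot state under (thread, checkpoint) keys; load it later to resume." A toy in-memory version, standing in for the MySQL-backed implementation:

```python
import pickle

class DictSaver:
    """Toy checkpointer: snapshot state after every step so an
    interrupted run can resume. Illustrative stand-in for MySQLSaver."""

    def __init__(self):
        self._store: dict[tuple, bytes] = {}

    def put(self, thread_id: str, checkpoint_id: str, state: dict) -> None:
        self._store[(thread_id, checkpoint_id)] = pickle.dumps(state)

    def get(self, thread_id: str, checkpoint_id: str) -> dict:
        return pickle.loads(self._store[(thread_id, checkpoint_id)])

saver = DictSaver()
saver.put("sess-001", "ckpt-1",
          {"messages": ["hi"], "requires_human_review": True})

# Later (e.g., after a human decision) resume from the snapshot:
resumed = saver.get("sess-001", "ckpt-1")
print(resumed["requires_human_review"])  # → True
```

LangGraph's real checkpointers implement this interface with extra bookkeeping (parent checkpoints for history, per-channel writes).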
- Python 3.11+
- MySQL 8.0+ (for MySQL persistence)
- OpenAI API key
- (Optional) AWS Bedrock credentials for fallback models
cd langgraph_human_in_the_loop_HITL
python -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate
pip install -r requirements.txt

Core dependencies:
- `langgraph>=0.2.0` - Graph orchestration
- `langchain>=0.3.0` - LLM chains and tools
- `langchain-openai>=0.2.0` - OpenAI integration
- `langchain-core>=0.3.0` - Core abstractions
- `openai>=1.0.0` - Direct OpenAI API
- `python-dotenv>=1.0.0` - Environment management
- `pydantic>=2.0.0` - Data validation
- `fastapi` - REST API (add if needed)
- `uvicorn` - ASGI server (add if needed)
- `pymysql` - MySQL driver (add if needed)
Create .env file in project root:
# OpenAI Configuration
OPENAI_API_KEY=sk-...your-key...
OPENAI_MODEL=gpt-4o-mini
# MySQL Configuration (if using MySQL checkpointer)
MYSQL_HOST=localhost
MYSQL_PORT=3306
MYSQL_USER=root
MYSQL_PASSWORD=password
MYSQL_DATABASE=langgraph_db
# AWS Bedrock (optional, for fallbacks)
AWS_REGION=us-east-1
AWS_DEFAULT_REGION=us-east-1

# Create database
mysql -u root -p
CREATE DATABASE langgraph_db CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
exit
# Test connection
python mysql_db_test.py

The MySQLSaver will auto-create required tables on first use.
# Start interactive chat
python graph.py
# Example interaction:
# > You: "I want to check my order status"
# [Router decides to invoke: order_agent]
# [Order Agent checks order status]
# > Assistant: "Your order #12345 is currently in transit..."

python graph.py --demo
# Runs pre-defined test scenarios
# Useful for testing without user input

# Start API server
cd langgraph_human_loop
uvicorn api:app --reload --port 8000
# In another terminal, test endpoints:
# 1. Health check
curl http://localhost:8000/health
# 2. Send message
curl -X POST http://localhost:8000/chat \
-H "Content-Type: application/json" \
-d '{
"message": "Check my order status",
"session_id": "user-123"
}'
# Response:
{
"session_id": "user-123",
"response": "Your order is in transit...",
"agents_invoked": ["router", "order_agent"],
"requires_human_review": false,
"processing_complete": true
}
# 3. Submit human review (if required)
curl -X POST http://localhost:8000/human-review \
-H "Content-Type: application/json" \
-d '{
"session_id": "user-123",
"approved": true,
"feedback": "Approved - customer verified"
}'
# 4. Get session status
curl http://localhost:8000/status/user-123
# 5. Access Swagger UI
# Open browser → http://localhost:8000/docs

from langgraph_human_loop.graph import build_graph
from langgraph_human_loop.state import AgentState
from langgraph.checkpoint.memory import MemorySaver
# Build graph with memory checkpointer
checkpointer = MemorySaver()
graph = build_graph(checkpointer=checkpointer)
# Create initial state
state = AgentState(
    user_input="I want to return my order",
    session_id="session-001"
)

# Execute graph
config = {"configurable": {"thread_id": "session-001"}}
result = graph.invoke(state, config=config)

# graph.invoke returns the final state as a dict
print(f"Final response: {result['final_response']}")
print(f"Agents used: {result['completed_agents']}")

Request:
{
"message": "string (required)",
"session_id": "string (optional, auto-generated if omitted)"
}

Response:
{
"session_id": "string",
"response": "string (final synthesized response)",
"agents_invoked": ["string (agent names)"],
"requires_human_review": "boolean",
"human_review_reason": "string (if human review required)",
"processing_complete": "boolean"
}

Status Codes:
- `200 OK` - Successfully processed
- `400 Bad Request` - Invalid input
- `500 Internal Server Error` - Graph execution error
Request:
{
"session_id": "string (required)",
"approved": "boolean (required)",
"feedback": "string (optional)"
}

Response:
{
"session_id": "string",
"decision": "Approved|Rejected",
"final_response": "string"
}

Response:
{
"session_id": "string",
"current_agent": "string | null",
"completed_agents": ["string"],
"requires_human_review": "boolean",
"conversation_history": [
{
"role": "user | assistant",
"content": "string"
}
],
"order_result": {...},
"payment_result": {...},
"delivery_result": {...},
"refund_result": {...}
}

Response:
{
"status": "ok",
"service": "E-Commerce Multi-Agent System"
}

- Purpose: Analyzes user input and decides routing
- Input: `user_input`, conversation history
- Output: `next_agents` (list of agents to invoke)
- Logic: Keyword matching + optional LLM reasoning
def router(state):
    msg = state["messages"][-1].content.lower()
    if "refund" in msg:
        return ["refund_agent"]
    elif "track" in msg or "shipping" in msg:
        return ["order_agent", "delivery_agent"]
    else:
        return ["support_agent"]

- Purpose: Order lookup and status checking
- Tools: `check_order_status(order_id)`
- Output: `order_result` (OrderResult type)
- Example Flow:
  - User: "Track order #12345"
  - LLM calls: `check_order_status("12345")`
  - Returns: Order details, tracking info
- Purpose: Payment status, declined card handling
- Tools: Process payment, check payment status
- Output: `payment_result` (PaymentResult type)
- Purpose: Delivery tracking, ETA estimation
- Tools: Get delivery status, location tracking
- Output: `delivery_result` (DeliveryResult type)
- Purpose: Shipping method selection, carrier management
- Tools: Check shipping options, generate tracking
- Output: `shipping_result` (ShippingResult type)
- Purpose: Refund processing with human approval
- Tools: `process_refund(order_id, amount)`
- Workflow:
- Analyze refund request
- Interrupt: Request human approval
- Wait for human decision
- If approved: Process refund
- Return result
def refund_agent(state):
    order_id = state.get("order_id")
    amount = state.get("amount")

    # Pause for human approval
    approval = interrupt({
        "type": "refund_approval",
        "order_id": order_id,
        "amount": amount,
    })

    if approval:
        result = process_refund(order_id, amount)
        return {"refund_result": result}
    else:
        return {"refund_result": RefundResult(status="rejected")}

User Request
│
▼
[Router analyzes → picks agents]
│
▼
[Agents execute with tools]
│
▼
[Agent outputs reviewed]
│
├─── Does output require human approval? NO ─────┐
│ │
YES │
│ │
▼ ▼
[Interrupt triggered] [Synthesize final response]
[Graph pauses at human_review node] │
[Session persisted to DB] │
│ │
▼ │
[API returns to client] │
[Client waits for human decision] │
│ │
▼ │
[Human reviews via /human-review endpoint] │
[Decision passed back to graph] │
│ │
▼ │
[Resume execution with approval] │
[Re-invoke agents with human feedback] │
│ │
└──────────────────────────────────────────┘
│
▼
[Final synthesized response]
[Return to client]
Flow 1: Graph Execution (Interrupted)
# Request comes in
user_input: "I want a refund for order #999"
session_id: "sess-001"
# Step 1: Router analyzes
→ Identifies refund request
→ Sets next_agents = ["refund_agent"]
# Step 2: Refund Agent executes
→ Validates refund eligibility
→ Calls interrupt() with approval request
→ GRAPH PAUSES HERE
# Step 3: Graph persisted
→ MySQLSaver stores checkpoint
→ Current state with all intermediate values saved
→ Returns to API with requires_human_review=True

Flow 2: Human Review (Browser/Mobile)
# Human sees pending approval in UI
{
"type": "refund_approval",
"order_id": "999",
"amount": 49.99,
"reason": "Customer requested return"
}
# Human clicks "Approve"
POST /human-review
{
"session_id": "sess-001",
"approved": true,
"feedback": "Verified customer identity"
}

Flow 3: Graph Resumes
# Backend resumes graph from checkpoint
→ Re-invokes refund_agent with approval=True
→ Continues to synthesize response
→ Returns final result
{
"session_id": "sess-001",
"response": "Refund of $49.99 processed successfully",
"agents_invoked": ["router", "refund_agent"],
"requires_human_review": false,
"processing_complete": true
}

The MySQLSaver creates two tables:
CREATE TABLE checkpoints (
thread_id VARCHAR(128) NOT NULL,
checkpoint_id VARCHAR(128) NOT NULL,
parent_checkpoint_id VARCHAR(128),
checkpoint LONGBLOB NOT NULL, -- Pickled graph state
metadata LONGBLOB NOT NULL, -- JSON metadata
PRIMARY KEY (thread_id, checkpoint_id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;

Columns:
- `thread_id` - Session ID
- `checkpoint_id` - Unique checkpoint ID
- `parent_checkpoint_id` - Previous checkpoint (for history)
- `checkpoint` - Serialized graph state
- `metadata` - Metadata (timestamps, tags)
CREATE TABLE checkpoint_writes (
thread_id VARCHAR(128) NOT NULL,
checkpoint_id VARCHAR(128) NOT NULL,
task_id VARCHAR(128) NOT NULL,
idx INT NOT NULL,
channel VARCHAR(256) NOT NULL, -- State channel name
value LONGBLOB, -- Serialized value
PRIMARY KEY (thread_id, checkpoint_id, task_id, idx)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;

Columns:
- `thread_id` - Session ID
- `checkpoint_id` - Checkpoint reference
- `task_id` - Task/step identifier
- `channel` - State field name (e.g., "messages", "order_result")
- `value` - Serialized value
Agent executes
│
▼
State updated
│
├─── Save to MySQL
│ ├─ Insert into checkpoints
│ └─ Insert into checkpoint_writes
│
▼
Next agent executes
│
(repeat)
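The write path above can be exercised locally by mirroring the schema in SQLite; this is an illustrative stand-in, since the real system targets MySQL via PyMySQL.

```python
import pickle
import sqlite3

# In-memory mirror of the checkpoints table from the schema above.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE checkpoints (
    thread_id TEXT NOT NULL,
    checkpoint_id TEXT NOT NULL,
    parent_checkpoint_id TEXT,
    checkpoint BLOB NOT NULL,
    metadata BLOB NOT NULL,
    PRIMARY KEY (thread_id, checkpoint_id))""")

# One write per step: serialize the state and store it under the session.
state = {"messages": ["refund requested"], "requires_human_review": True}
conn.execute(
    "INSERT INTO checkpoints VALUES (?, ?, ?, ?, ?)",
    ("sess-001", "ckpt-1", None, pickle.dumps(state), pickle.dumps({"step": 1})),
)

# Recovery: load the latest checkpoint for the session and deserialize.
row = conn.execute(
    "SELECT checkpoint FROM checkpoints WHERE thread_id = ?", ("sess-001",)
).fetchone()
print(pickle.loads(row[0])["requires_human_review"])  # → True
```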
# OpenAI
OPENAI_API_KEY=sk-...
OPENAI_MODEL=gpt-4o-mini
# MySQL (if using MySQLSaver)
MYSQL_HOST=localhost
MYSQL_PORT=3306
MYSQL_USER=root
MYSQL_PASSWORD=password
MYSQL_DATABASE=langgraph_db
# AWS Bedrock (for fallback models)
AWS_REGION=us-east-1
AWS_ACCESS_KEY_ID=...
AWS_SECRET_ACCESS_KEY=...
# FastAPI
API_HOST=0.0.0.0
API_PORT=8000
API_DEBUG=false

Graph Options: (in graph.py)
# With memory checkpointer
graph = build_graph(checkpointer=MemorySaver())
# With the project's custom MySQL checkpointer
# (LangGraph has no built-in MySQLSaver — import it from this repo's implementation)
mysql_saver = MySQLSaver.from_conn_string(
    "mysql+pymysql://user:pass@localhost:3306/langgraph_db"
)
graph = build_graph(checkpointer=mysql_saver)
# Interrupt before human review
graph = builder.compile(
    checkpointer=checkpointer,
    interrupt_before=["human_review"]
)

User Input:
"Can you check the status of my order?"
Graph Execution:
1. Router analyzes input
→ Identifies "order" keyword
→ Routes to: [order_agent]
2. Order Agent executes
→ LLM decides to call check_order_status tool
→ Returns: "Your order #12345 is in transit"
3. Synthesizer creates final response
→ No human review needed
→ Returns to user
API Response:
{
"session_id": "sess-123",
"response": "Your order #12345 is in transit. Expected delivery: March 10",
"agents_invoked": ["router", "order_agent"],
"requires_human_review": false,
"processing_complete": true
}

User Input:
"I want to return and get a refund for order #999"
Graph Execution (Part 1):
1. Router analyzes input
→ Identifies "refund" keyword
→ Routes to: [order_agent, refund_agent]
2. Order Agent executes
→ Validates order exists
→ Returns order details
3. Refund Agent executes
→ Calculates refund amount: $49.99
→ Checks if customer eligible
→ **INTERRUPT**: Requests human approval
→ Graph pauses, state saved to MySQL
API Response (Intermediate):
{
"session_id": "sess-124",
"response": null,
"agents_invoked": ["router", "order_agent"],
"requires_human_review": true,
"human_review_reason": "Refund of $49.99 requires approval",
"processing_complete": false
}

Human Review:
Human reviews in UI/Dashboard
Sees: "Approve refund of $49.99 for order #999?"
Clicks: "Approve"
POST /human-review
{
"session_id": "sess-124",
"approved": true,
"feedback": "Order eligible - verified customer"
}
Graph Execution (Part 2):
4. Resume from checkpoint
→ Refund Agent continues with approval=true
→ Calls process_refund tool
→ Returns: "Refund processed successfully"
5. Synthesizer creates final response
→ "Refund of $49.99 has been processed to your original payment method"
Final API Response:
{
"session_id": "sess-124",
"response": "Refund of $49.99 has been processed successfully",
"agents_invoked": ["router", "order_agent", "refund_agent"],
"requires_human_review": false,
"processing_complete": true
}

User Input:
"I need to cancel my order and get a full refund, and check why my payment was declined"
Graph Execution:
1. Router analyzes
→ Identifies: cancel, refund, payment declined
→ Routes to: [order_agent, refund_agent, payment_agent]
2. Parallel Execution
├─ Order Agent
│ └─ check_order_status("order-id") → In preprocessing
│
├─ Payment Agent
│ └─ check_payment("transaction-id") → Declined - card expired
│
└─ Refund Agent
└─ interrupt() → Requests approval for $X refund
3. Join all results
→ All agents done
4. Should review?
→ YES (refund_agent interrupted)
→ Route to human_review
5. Human approves refund
6. Synthesize final response
→ "Order cancelled. Refund of $X approved and processing.
Payment issue: Your card expired on 03/2024.
Update payment method in account settings."
Add new tools to extend agent capabilities:
# tools.py
from langchain_core.tools import tool

@tool
def check_inventory(product_id: str) -> str:
    """Check if a product is in stock."""
    # Your implementation
    return f"Product {product_id}: 45 units in stock"

# Add to agent
tools = [
    current_time,
    check_order_status,
    process_refund,
    check_inventory,  # New tool
]
llm_with_tools = llm.bind_tools(tools)

# agents.py
def inventory_agent(state):
    """Custom agent for inventory management."""
    messages = state.get("messages", [])
    tools = [check_inventory, update_stock]
    llm_with_tools = llm.bind_tools(tools)
    response = llm_with_tools.invoke(messages)
    return {"messages": [response]}

# Add to graph
builder.add_node("inventory_agent", inventory_agent)

def advanced_router(state):
"""More sophisticated routing based on state inspection."""
msg = state["messages"][-1].content.lower()
agents = []
# Analyze message for keywords
if any(word in msg for word in ["refund", "return", "cancel"]):
agents.append("refund_agent")
if any(word in msg for word in ["payment", "charge", "billing"]):
agents.append("payment_agent")
state["requires_human_review"] = True # Flag for approval
if any(word in msg for word in ["track", "delivery", "shipping"]):
agents.extend(["order_agent", "delivery_agent"])
return agentsdef should_review(state):
"""Determine if human review is needed."""
# High-value refunds
if state.get("refund_result") and state.refund_result.amount > 100:
return "human_review"
# VIP customer disputes
if state.get("is_vip_customer") and state.get("has_dispute"):
return "human_review"
# Default: no review needed
return "synthesize"# human_loop.py
from langchain_community.chat_models import ChatLiteLLM

def llm_factory():
    primary = ChatLiteLLM(
        model="bedrock/anthropic.claude-3-sonnet-20240229-v1:0",
        temperature=0.2,
        aws_region_name="us-east-1"
    )
    fallbacks = [
        ChatLiteLLM(
            model="bedrock/anthropic.claude-3-sonnet-20240229-v1:0",
            aws_region_name="us-west-2"
        ),
        ChatLiteLLM(
            model="bedrock/anthropic.claude-3-sonnet-20240229-v1:0",
            aws_region_name="eu-west-1"
        ),
    ]
    return primary.with_fallbacks(fallbacks)

Problem: Graph pauses but never receives human response.
Solution:
- Ensure client calls `/human-review` endpoint
- Check session ID matches between `/chat` and `/human-review`
- Verify graph was compiled with `interrupt_before=["human_review"]`
# Correct compilation
graph = builder.compile(
    checkpointer=checkpointer,
    interrupt_before=["human_review"]  # IMPORTANT
)

Problem: Connection refused or Access denied
Solution:
# Check MySQL is running
mysql -u root -p -e "SELECT 1"
# Verify connection string
MYSQL_HOST=localhost   # Some systems need 127.0.0.1 instead of localhost
MYSQL_PORT=3306
MYSQL_USER=root
MYSQL_PASSWORD=...
MYSQL_DATABASE=langgraph_db
# Test connection
python mysql_db_test.py

Problem: RateLimitError from OpenAI API
Solution:
- Add retry logic with exponential backoff
- Use a cheaper model tier (e.g., gpt-4o-mini)
- Implement request queuing
from langchain_core.rate_limiters import InMemoryRateLimiter
from langchain_openai import ChatOpenAI

# Throttle requests client-side and let the client retry transient errors
rate_limiter = InMemoryRateLimiter(requests_per_second=5)
llm = ChatOpenAI(
    model="gpt-4o-mini",
    max_retries=3,
    rate_limiter=rate_limiter,
)

Problem: pickle errors when saving to MySQL
Solution:
- Ensure all state fields are picklable
- Use Pydantic models for type safety
- Avoid storing file handles or threads
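A quick pre-flight check can catch unpicklable fields before they reach the checkpointer. This helper is illustrative, not part of the project:

```python
import pickle

def unpicklable_fields(state: dict) -> list[str]:
    """Return the keys whose values cannot be pickled."""
    bad = []
    for key, value in state.items():
        try:
            pickle.dumps(value)
        except Exception:
            bad.append(key)
    return bad

# Lambdas (like open files or DB connections) cannot be pickled:
state = {"user_id": "u-1", "amount": 49.99, "callback": lambda x: x}
print(unpicklable_fields(state))  # → ['callback']
```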
# Good: Serializable types
class MyState(BaseModel):
    user_id: str
    amount: float
    timestamp: datetime  # datetime pickles fine

# Bad: Non-serializable
class BadState(BaseModel):
    db_connection: Connection  # Can't pickle
    file_handle: IO            # Can't pickle

Problem: LLM not calling tools or wrong tool invoked
Solution:
# Ensure tools are properly bound
tools = [tool1, tool2, tool3]
llm_with_tools = llm.bind_tools(tools)

# Verify tool schemas
for tool in tools:
    print(tool.name, tool.description)
    print(tool.args)

# Use ToolNode to execute
from langgraph.prebuilt import ToolNode
tool_node = ToolNode(tools)

Enable Verbose Logging:
import logging
logging.basicConfig(level=logging.DEBUG)
# LangChain debug mode
from langchain.globals import set_debug
set_debug(True)

Inspect State at Each Step:
# Stream graph execution to see state progression
config = {"configurable": {"thread_id": "debug-123"}}
for step in graph.stream(state, config=config, stream_mode="values"):
    # each step is a snapshot of the state dict
    print("Agents completed:", step.get("completed_agents"))
    print("Requires review:", step.get("requires_human_review"))

Check Checkpoints:
# Query saved checkpoints
cursor = mysql_saver._get_conn().cursor()
cursor.execute(
    "SELECT thread_id, checkpoint_id FROM checkpoints LIMIT 10"
)
for row in cursor.fetchall():
    print(row)

This LangGraph Human-in-the-Loop system provides:
✅ Production-ready multi-agent orchestration
✅ Human oversight for sensitive operations
✅ Persistent checkpoint storage in MySQL
✅ REST API for easy integration
✅ Extensible agent-based architecture
✅ Scalable parallel agent execution
✅ Complete, auditable workflow history
Perfect for e-commerce, customer support, financial services, and any domain requiring intelligent workflows with human-in-the-loop control.
- LangGraph Documentation
- LangChain Tools
- FastAPI Documentation
- OpenAI API Reference
- MySQL Documentation
This is an educational project. Feel free to fork, modify, and extend for your use cases.
Questions or Issues? Open an issue or check the troubleshooting section above.
Last Updated: March 7, 2026
Version: 1.0.0