Complete guide for testing your n8n workflow with the sample Spring Boot application.
This setup includes:
- Spring Boot Order Service - Generates realistic production failures
- Sample Alert Messages - Pre-formatted Slack alerts for testing
- n8n Workflow - Analyzes alerts and provides AI-powered insights
```
cd sample-app

# Build and run
mvn clean install
mvn spring-boot:run
```

The service starts on http://localhost:8080.
Verify it's running:
```
curl http://localhost:8080/health
```

Generate some traffic to trigger failures:
```
cd ../sample-alerts
chmod +x test-workflow.sh
./test-workflow.sh
```

This will:
- Create 20+ orders (triggering random failures)
- Process payments (triggering gateway timeouts)
- Test validation errors
- Generate high load
- Open your Slack workspace
- Go to the `#mcp-testing` channel (or your configured channel)
- Copy any alert from `sample-alerts/sample-slack-alerts.md`
- Paste the alert message in Slack
- The n8n workflow should trigger automatically
- Check the `#n8n-output` channel for AI analysis
Use curl to test the workflow directly (if you have webhook access):
```
# Example: Send alert to n8n webhook
curl -X POST "YOUR_N8N_WEBHOOK_URL" \
  -H "Content-Type: application/json" \
  -d '{
    "text": "🔴 *[CRITICAL] Database Query Timeout*\n\n*Service:* `order-service`..."
  }'
```

Alert to Test:
🔴 *[CRITICAL] Database Query Timeout*
*Service:* `order-service`
*Environment:* `production`
*Time:* 2025-11-21 14:23:45 UTC
*Details:*
• *Error:* Database query exceeded 30s timeout
• *Connection Pool:* Exhausted (10/10 connections in use)
• *Impact:* Order creation API failing - 45% error rate
Expected AI Response:
- Identifies database connection pool exhaustion
- Suggests immediate restart
- Recommends increasing pool size
- Mentions checking recent deployments
- Provides query optimization tips
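For the pool-size recommendation: Spring Boot's default connection pool is HikariCP, which can be tuned in `application.properties`. The values below are illustrative examples, not this project's settings:

```properties
# Illustrative HikariCP tuning (example values, not project defaults)
spring.datasource.hikari.maximum-pool-size=20
spring.datasource.hikari.connection-timeout=5000
spring.datasource.hikari.leak-detection-threshold=30000
```

Note that raising the pool size only masks connection leaks; the leak-detection threshold helps surface queries that hold connections too long.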
Alert to Test:
🟠 *[HIGH] Payment Gateway Timeout*
*Service:* `order-service`
*Environment:* `production`
*Details:*
• *Error:* Payment gateway timeout - Stripe API not responding
• *Timeout:* 30s exceeded
• *Affected Orders:* 23 orders stuck in PAYMENT_PENDING status
Expected AI Response:
- Identifies external service dependency issue
- Suggests checking Stripe status page
- Recommends implementing retry logic
- Mentions circuit breaker pattern
- Estimates customer impact
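The retry-logic recommendation can be sketched as a small wrapper around the external call (illustrative Python, not project code; the function and parameter names are placeholders):

```python
import random
import time

# Illustrative retry-with-backoff wrapper for an external dependency such
# as a payment gateway call. Placeholder names, not part of the sample app.
def call_with_retry(fn, attempts=3, base_delay=0.5):
    for attempt in range(attempts):
        try:
            return fn()
        except TimeoutError:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the failure
            # Exponential backoff with jitter before the next attempt
            time.sleep(base_delay * (2 ** attempt) * (1 + random.random()))
```

A circuit breaker would go one step further and stop calling the gateway entirely after repeated failures, rather than retrying every request.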
Alert to Test:
🔴 *[CRITICAL] High Error Rate Detected*
*Service:* `order-service`
*Error Rate:* 52% (260 errors / 500 requests)
*Primary Errors:*
- DatabaseTimeoutException: 45%
- PaymentGatewayException: 15%
Expected AI Response:
- Identifies multiple cascading failures
- Prioritizes database issue as primary cause
- Suggests immediate escalation
- Recommends creating P1 incident
- Provides war room coordination steps
After each test, verify the AI response includes:
- Severity Recognition: Correctly identifies CRITICAL/HIGH/MEDIUM
- Service Identification: Extracts service name (`order-service`)
- Environment: Recognizes production environment
- Root Cause: Identifies the primary issue
- Impact Assessment: Understands customer/business impact
- Immediate Actions: Provides actionable next steps
- Investigation Steps: Suggests what to check
- Long-term Fixes: Recommends preventive measures
- Related Context: Links to deployments, metrics, etc.
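If you want to spot-check responses programmatically rather than by eye, a rough sketch follows; the patterns are assumptions tailored to these sample alerts, not part of the workflow:

```python
import re

# Hypothetical spot-check: scan an AI response for items from the checklist above.
CHECKS = {
    "severity": re.compile(r"\b(CRITICAL|HIGH|MEDIUM)\b"),
    "service": re.compile(r"order-service"),
    "environment": re.compile(r"\bproduction\b", re.IGNORECASE),
    "actions": re.compile(r"(restart|scale|rollback|increase|escalat)", re.IGNORECASE),
}

def audit_response(text):
    """Return which checklist items the response appears to cover."""
    return {name: bool(rx.search(text)) for name, rx in CHECKS.items()}
```

This only checks for surface keywords, so treat it as a smoke test, not a substitute for reading the response.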
```
# Get current metrics
curl http://localhost:8080/api/metrics | jq
```

Expected output:

```json
{
  "service": "order-service",
  "totalRequests": 150,
  "failedRequests": 45,
  "errorRate": "30.00%",
  "avgResponseTimeMs": "1250.50",
  "memoryUsageMb": 512
}
```

```
# View logs in real-time
tail -f logs/application.log

# Look for error patterns:
# - DatabaseTimeoutException
# - PaymentGatewayException
# - InventoryException
```

The application has built-in failure simulation:
- Database Timeout: 10% of requests
- Database Connection Error: 5% of requests
- Inventory Failure: 8% of requests
- Payment Timeout: 15% of payment requests
- Payment Declined: 10% of payment requests
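Assuming these failures are injected independently per request, the first three rates compound into the overall order-creation error rate. A back-of-the-envelope sketch (not the application's actual logic):

```python
# Documented per-request failure probabilities for order creation
rates = [0.10, 0.05, 0.08]  # database timeout, connection error, inventory failure

# If each failure is injected independently, a request succeeds only
# when it survives every failure check.
success = 1.0
for p in rates:
    success *= 1.0 - p

print(f"expected order error rate: {1 - success:.1%}")  # about 21%
```

Even modest per-failure rates compound to roughly one in five orders failing, which is why the test script only needs 20+ orders to reliably surface errors.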
Solution:
```
# Check if port 8080 is already in use
lsof -i :8080

# Kill the existing process if needed
kill -9 <PID>

# Or run on a different port
mvn spring-boot:run -Dspring-boot.run.arguments=--server.port=8081
```

Solution: The failures are random. Generate more traffic:
```
# Run the test script multiple times
for i in {1..5}; do
  ./test-workflow.sh
  sleep 2
done
```

Solution:
- Check Slack channel ID in workflow matches your channel
- Verify Slack credentials are configured
- Check n8n workflow is activated
- Test with simple message first: "test alert"
Solution:
- Ensure alert message includes enough context
- Add more details: service name, error type, metrics
- Include stack traces and error messages
- Provide recent deployment information
Create your own alert messages following this template:
🔴 *[SEVERITY] Alert Title*
*Service:* `service-name`
*Environment:* `production`
*Time:* YYYY-MM-DD HH:MM:SS UTC
*Details:*
• *Error:* Detailed error message
• *Impact:* What is affected
• *Metrics:* Relevant numbers
• *Stack Trace:* Error location
*Context:*
• Recent changes
• Related incidents
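To generate such messages programmatically, a small helper can fill the template. This is hypothetical code; the field names and emoji mapping are assumptions based on the samples above:

```python
from datetime import datetime, timezone

SEVERITY_EMOJI = {"CRITICAL": "🔴", "HIGH": "🟠", "MEDIUM": "🟡"}  # assumed mapping

def format_alert(severity, title, service, error, impact, environment="production"):
    """Build a Slack mrkdwn alert string following the template above."""
    ts = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S UTC")
    return (
        f"{SEVERITY_EMOJI[severity]} *[{severity}] {title}*\n\n"
        f"*Service:* `{service}`\n"
        f"*Environment:* `{environment}`\n"
        f"*Time:* {ts}\n\n"
        f"*Details:*\n"
        f"• *Error:* {error}\n"
        f"• *Impact:* {impact}\n"
    )
```

The resulting string can be pasted into Slack or posted to the n8n webhook as the `text` field.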
Generate sustained load to trigger multiple failures:
```
# Apache Bench (ab) is preinstalled on macOS; via Homebrew it ships with httpd
brew install httpd  # macOS

# Generate 1000 requests with 50 concurrent connections
ab -n 1000 -c 50 -p order.json -T application/json \
  http://localhost:8080/api/orders
```

Where order.json contains:

```json
{
  "customerId": "CUST-001",
  "productId": "PROD-001",
  "quantity": 1,
  "totalAmount": 99.99
}
```

To send real alerts to Slack from the application:
- Add Slack webhook to application
- Configure alert thresholds
- Implement alert formatter
- Send alerts on exception
Example (add to GlobalExceptionHandler.java):

```java
// Assumes a configured RestTemplate bean, a slackWebhookUrl property,
// and a simple SlackMessage payload class with a "text" field.
private void sendSlackAlert(Exception ex) {
    String alert = formatAlert(ex);
    // Post the formatted alert to the Slack incoming webhook
    restTemplate.postForEntity(
        slackWebhookUrl,
        new SlackMessage(alert),
        String.class
    );
}
```

Your workflow is working correctly if:
- Response Time: AI responds within 5-10 seconds
- Accuracy: Root cause identified correctly >80% of time
- Actionability: Provides specific, actionable steps
- Context: Includes relevant deployment/metric information
- Formatting: Response is well-formatted in Slack
- ✅ Test with all 10 sample alerts
- ✅ Verify AI responses are accurate
- ✅ Add more context to workflow (deployment history, runbooks)
- ✅ Implement action buttons in Slack responses
- ✅ Add incident ticket creation
- ✅ Integrate with monitoring systems (Datadog, Prometheus)
- ✅ Create runbook database for AI to reference
- ✅ Add historical incident matching
- Application Code: `sample-app/`
- Sample Alerts: `sample-alerts/sample-slack-alerts.md`
- Test Script: `sample-alerts/test-workflow.sh`
- n8n Workflow: `n8n-workflow/slack-2-ai-2-slack.json`
- API Documentation: `sample-app/README.md`