This guide explains how to import the sample logs into your Kibana/Elasticsearch instance and test the n8n workflow with realistic production scenarios.
The kibana-sample-logs.json file contains 15 log entries that match the alerts in sample-alerts/sample-slack-alerts.md and the code in sample-app/.
Each log entry includes:
- @timestamp: ISO 8601 timestamp
- level: ERROR, WARN, INFO, CRITICAL
- service: Service metadata (name, version, environment)
- message: Human-readable error message
- exception: Stack traces matching actual Java code
- http: Request details (method, path, status, response time)
- host: Server information
- Additional context: Database, memory, disk, external services, etc.
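For orientation, here is a hypothetical entry in that shape. The timestamp and exception details mirror the first test scenario below; the version, host name, and IP are illustrative placeholders, not values from the actual sample file:

```json
{
  "@timestamp": "2025-11-21T14:23:45.000Z",
  "level": "ERROR",
  "service": { "name": "order-service", "version": "1.0.0", "environment": "production" },
  "message": "Database query exceeded 30s timeout",
  "exception": {
    "type": "DatabaseTimeoutException",
    "message": "Query timed out after 30000ms",
    "stacktrace": "DatabaseTimeoutException: ...\n\tat OrderService.java:121"
  },
  "http": { "method": "POST", "path": "/api/orders", "status_code": 500, "response_time_ms": 30012 },
  "host": { "name": "prod-app-01", "ip": "10.0.1.15" }
}
```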
- Open your Kibana instance
- Navigate to Dev Tools (hamburger menu → Management → Dev Tools)
- Run the following request to create the index with an explicit mapping:
PUT /logs-order-service-2025.11.21
{
"mappings": {
"properties": {
"@timestamp": { "type": "date" },
"level": { "type": "keyword" },
"service": {
"properties": {
"name": { "type": "keyword" },
"version": { "type": "keyword" },
"environment": { "type": "keyword" }
}
},
"message": { "type": "text" },
"exception": {
"properties": {
"type": { "type": "keyword" },
"message": { "type": "text" },
"stacktrace": { "type": "text" }
}
},
"http": {
"properties": {
"method": { "type": "keyword" },
"path": { "type": "keyword" },
"status_code": { "type": "integer" },
"response_time_ms": { "type": "long" }
}
},
"host": {
"properties": {
"name": { "type": "keyword" },
"ip": { "type": "ip" }
}
}
}
}
}

# From the sample-logs directory
curl -X POST "https://your-kibana-url:9200/_bulk" \
-H "Content-Type: application/x-ndjson" \
-H "Authorization: ApiKey YOUR_API_KEY" \
--data-binary @kibana-bulk-import.ndjson

Or use the bulk import script (see below).
- Go to Stack Management → Index Patterns
- Click Create index pattern
- Enter pattern: logs-order-service-*
- Select time field: @timestamp
- Click Create index pattern
Run the conversion script:
cd sample-logs
node convert-to-bulk.js

This creates kibana-bulk-import.ndjson with the proper format:
{"index":{"_index":"logs-order-service-2025.11.21"}}
{...log entry 1...}
{"index":{"_index":"logs-order-service-2025.11.21"}}
{...log entry 2...}
curl -X POST "https://your-kibana-url:9200/_bulk" \
-H "Content-Type: application/x-ndjson" \
-H "Authorization: ApiKey YOUR_API_KEY" \
--data-binary @kibana-bulk-import.ndjson

Verify the import:

curl -X GET "https://your-kibana-url:9200/logs-order-service-2025.11.21/_count" \
-H "Authorization: ApiKey YOUR_API_KEY"

Expected response: {"count":15,...}
# Download and install Filebeat
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-8.11.0-darwin-x86_64.tar.gz
tar xzvf filebeat-8.11.0-darwin-x86_64.tar.gz
cd filebeat-8.11.0-darwin-x86_64

Edit filebeat.yml:
filebeat.inputs:
- type: log
enabled: true
paths:
- /path/to/sample-logs/kibana-sample-logs.json
json.keys_under_root: true
json.add_error_key: true
output.elasticsearch:
hosts: ["https://your-kibana-url:9200"]
api_key: "YOUR_API_KEY"
index: "logs-order-service-%{+yyyy.MM.dd}"
setup.ilm.enabled: false

Run Filebeat:

./filebeat -e

1. Send Slack Alert:
🔴 *[CRITICAL] Database Query Timeout*
*Service:* `order-service`
*Time:* 2025-11-21 14:23:45 UTC
*Details:*
• *Error:* Database query exceeded 30s timeout
• *Stack Trace:* DatabaseTimeoutException at OrderService.java:89
2. Expected AI Response:
The AI should:
- Search Kibana logs for "DatabaseTimeoutException"
- Find matching logs at 14:23:45 UTC
- Search GitHub for "OrderService.java"
- Read the file and find line 89 (or 121 where exception is thrown)
- Provide analysis:
- Root cause: Connection pool exhausted (10/10 connections)
- Code location: OrderService.java:121 in simulateProductionFailures()
- Recommendation: Increase pool size, add connection leak detection
1. Send Slack Alert:
🟠 *[HIGH] Payment Gateway Timeout*
*Service:* `order-service`
*Time:* 2025-11-21 14:45:12 UTC
*Details:*
• *Error:* Payment gateway timeout - Stripe API not responding
• *Stack Trace:* PaymentGatewayException at PaymentService.java:67
2. Expected AI Response:
The AI should:
- Search Kibana for "PaymentGatewayException" around 14:45:12
- Find 30s timeout to Stripe API
- Search GitHub for "PaymentService.java"
- Identify line 67 (or 99 where exception is thrown)
- Provide analysis:
- Root cause: Stripe API timeout after 30s
- Code location: PaymentService.java:99 in simulatePaymentGatewayFailures()
- Recommendation: Add retry logic, check Stripe status page
1. Send Slack Alert:
🟠 *[HIGH] Inventory Service Unavailable*
*Service:* `order-service`
*Time:* 2025-11-21 15:02:33 UTC
*Details:*
• *Error:* Inventory service returning HTTP 503
• *Stack Trace:* InventoryException at OrderService.java:102
2. Expected AI Response:
The AI should:
- Search Kibana for "InventoryException" at 15:02:33
- Find HTTP 503 from inventory-service
- Search GitHub for "OrderService.java" and "InventoryService"
- Identify circuit breaker pattern
- Provide analysis:
- Root cause: Inventory service DOWN, circuit breaker OPEN
- Code location: OrderService.java:137 and InventoryService.java:45
- Recommendation: Check inventory service health, review fallback behavior
Find recent errors (last hour):

POST /logs-order-service-*/_search
{
"query": {
"bool": {
"must": [
{ "match": { "level": "ERROR" } },
{ "range": { "@timestamp": { "gte": "now-1h" } } }
]
}
},
"size": 20,
"sort": [{ "@timestamp": "desc" }]
}

Search for database timeout exceptions:

POST /logs-order-service-*/_search
{
"query": {
"bool": {
"must": [
{ "match": { "exception.type": "DatabaseTimeoutException" } }
]
}
}
}

Search for payment gateway failures:

POST /logs-order-service-*/_search
{
"query": {
"bool": {
"must": [
{ "match": { "exception.type": "PaymentGatewayException" } }
]
}
}
}

Find slow requests (response time >= 5s):

POST /logs-order-service-*/_search
{
"query": {
"bool": {
"must": [
{ "range": { "http.response_time_ms": { "gte": 5000 } } }
]
}
},
"sort": [{ "http.response_time_ms": "desc" }]
}

Example request body for the execute_kb_api tool (search recent errors):

{
"method": "POST",
"path": "/api/console/proxy?path=logs-order-service-*/_search&method=POST",
"body": {
"query": {
"bool": {
"must": [
{ "match": { "level": "ERROR" } },
{ "match": { "service.name": "order-service" } },
{ "range": { "@timestamp": { "gte": "2025-11-21T14:00:00Z", "lte": "2025-11-21T15:00:00Z" } } }
]
}
},
"size": 10,
"sort": [{ "@timestamp": "desc" }]
}
}

Search by exception type:

{
"method": "POST",
"path": "/api/console/proxy?path=logs-order-service-*/_search&method=POST",
"body": {
"query": {
"match": {
"exception.type": "DatabaseTimeoutException"
}
},
"size": 5
}
}

Problem: No logs found in Kibana

Solution:
- Check the index exists: GET /logs-order-service-*/_count
- Verify the index pattern was created
- Check time range in Kibana (set to last 30 days)
- Refresh index pattern
Problem: Queries return no results

Solution:
- Verify index name in query path
- Check timestamp format and range
- Test query directly in Kibana Dev Tools
- Ensure Kibana credentials are correct in .env
Problem: The AI response doesn't include log analysis

Solution:
- Check AI system prompt includes Kibana instructions
- Verify MCP client is connected (check n8n logs)
- Test execute_kb_api tool manually
- Ensure log timestamps match alert timestamps
✅ Logs imported successfully: 15 documents in index
✅ Index pattern created: logs-order-service-*
✅ Queries return results: Test queries work in Dev Tools
✅ MCP tool works: execute_kb_api returns log data
✅ AI correlates data: AI finds logs AND code for alerts
✅ End-to-end flow: Slack → AI → Kibana + GitHub → Slack response
- Import the sample logs
- Test queries in Kibana Dev Tools
- Test MCP execute_kb_api tool manually
- Send test alerts to Slack
- Verify AI response includes both log and code analysis
- Iterate on system prompt if needed
Your n8n workflow should now be able to analyze production alerts using both Kibana logs and GitHub code! 🎉