Let's make a separate page for it, and don't forget to add it to menu.yaml.
Let's make sure we include this OAS API definition so users can copy it.
You can see that the metrics and tracing sections are commented out; link to those docs and mention that this example also shows how to configure tracing.
Now look at this file and you can see what's going on. We have Kafka (mention it as a requirement) and three streams. It shows an event-driven architecture end to end. Assume we have some microservice (in this case also implemented as the "Worker" stream) doing background work. We post some events, adding a correlation ID and a user ID (which also demonstrates dynamic variable replacement); after the worker has processed them, we stream them out and make sure the messages get filtered, which shows the mapping processor in action.
Make sure you prepare a very descriptive page/tutorial on how it all works, optimized for architects specializing in event-driven architectures.
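To make the enrichment step concrete for the tutorial, here is a minimal Python sketch of what the ingress ("in") stream's mapping does. This is an illustration only, not Tyk code: in the real definition the Bloblang-style mapping runs inside the gateway, and `$tyk_context.request_data_user` is resolved by Tyk from the incoming request; the `enrich_event` helper name is hypothetical.

```python
import uuid

def enrich_event(event: dict, request_user: str) -> dict:
    """Mimic the 'in' stream's mapping: stamp the posting user's ID
    and a fresh correlation ID (job_id) onto every inbound event."""
    enriched = dict(event)
    enriched["user_id"] = request_user      # gateway fills this from $tyk_context.request_data_user
    enriched["job_id"] = str(uuid.uuid4())  # correlation ID, like uuid_v4() in the mapping
    return enriched

event = enrich_event({"payload": "hello"}, request_user="alice")
print(event["user_id"])  # alice
```

The correlation ID (`job_id`) is what lets the tutorial trace a single event from ingestion, through the worker, to the filtered output.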
info:
  title: streams-demo
  version: 1.0.0
openapi: 3.0.3
servers:
  - url: http://tyk-gateway:8282/bi/
x-tyk-streaming:
  streams:
    Worker:
      input:
        kafka:
          addresses:
            - localhost:9093
          consumer_group: worker
          topics:
            - jobs
      output:
        kafka:
          addresses:
            - localhost:9093
          topic: completed
      pipeline:
        processors:
          - mapping: |
              root = this.merge({ "result": "bar" })
      # metrics:
      #   prometheus:
      #     push_interval: 1s
      #     push_job_name: Worker
      #     push_url: http://localhost:9091
      # tracer:
      #   jaeger:
      #     collector_url: http://localhost:14268/api/traces
      #     tags:
      #       stream: Worker
    in:
      input:
        http_server:
          path: /push-event
          ws_path: /ws-out
      output:
        broker:
          outputs:
            - kafka:
                addresses:
                  - localhost:9093
                topic: jobs
            - sync_response: {}
      pipeline:
        processors:
          - mapping: |
              root = this
              root.user_id = "$tyk_context.request_data_user"
              root.job_id = uuid_v4()
      # tracer:
      #   jaeger:
      #     collector_url: http://localhost:14268/api/traces
      #     tags:
      #       stream: in
      # metrics:
      #   prometheus:
      #     push_interval: 1s
      #     push_job_name: in
      #     push_url: http://localhost:9091
    out:
      input:
        kafka:
          addresses:
            - localhost:9093
          consumer_group: $tyk_context.request_data_user
          topics:
            - completed
      output:
        http_server:
          path: /get-event
          ws_path: /ws-in
      pipeline:
        processors:
          - mapping: |
              root = if this.user_id != "$tyk_context.request_data_user" {
                deleted()
              }
      # tracer:
      #   jaeger:
      #     collector_url: http://localhost:14268/api/traces
      #     tags:
      #       stream: out
      # metrics:
      #   prometheus:
      #     push_interval: 1s
      #     push_job_name: out
      #     push_url: http://localhost:9091
security: []
paths: {}
components:
  securitySchemes: {}
x-tyk-api-gateway:
  info:
    name: streams-demo
    state:
      active: true
      internal: false
  middleware:
    global:
      contextVariables:
        enabled: true
      trafficLogs:
        enabled: true
  server:
    listenPath:
      strip: true
      value: /bi/
  upstream:
    proxy:
      enabled: false
      url: ''
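The Worker's merge and the "out" stream's per-user filter can likewise be sketched in Python for the tutorial. Again this only approximates the Bloblang mappings above (the `worker_process` and `filter_for_user` names are made up for illustration): the worker merges a `result` field into each event, and the out stream calls `deleted()` for any event whose `user_id` does not match the requesting user's context variable.

```python
def worker_process(event: dict) -> dict:
    """Mimic the Worker stream's mapping: merge a result field into the event."""
    return {**event, "result": "bar"}

def filter_for_user(events, request_user: str):
    """Mimic the 'out' stream's mapping: drop (deleted()) every event
    whose user_id does not match the requesting user."""
    return [e for e in events if e.get("user_id") == request_user]

completed = [worker_process(e) for e in [
    {"user_id": "alice", "job_id": "1"},
    {"user_id": "bob", "job_id": "2"},
]]
# Alice only ever sees her own completed jobs on /get-event.
print(filter_for_user(completed, "alice"))
```

This is the property worth calling out for EDA architects: fan-out happens on a shared `completed` topic, but the gateway-side mapping guarantees per-user isolation at delivery time.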