---
title: Profiling
weight: 8
BookToC: true
---

# Profiling PAC Components

Pipelines-as-Code components embed the [Knative profiling server](https://pkg.go.dev/knative.dev/pkg/profiling),
which exposes Go runtime profiling data via the standard `net/http/pprof` endpoints.
Profiling is useful for diagnosing CPU hot spots, memory growth, goroutine leaks, and
other performance issues.

## How It Works

Each PAC component starts an HTTP server on port **8008** (the default Knative profiling
port, overridable with the `PROFILING_PORT` environment variable). When profiling is
enabled the following endpoints are active:

| Endpoint | Description |
| --- | --- |
| `/debug/pprof/` | Index of all available profiles |
| `/debug/pprof/heap` | Heap memory allocations |
| `/debug/pprof/goroutine` | All current goroutines |
| `/debug/pprof/profile` | 30-second CPU profile |
| `/debug/pprof/trace` | Execution trace |
| `/debug/pprof/cmdline` | Process command line |
| `/debug/pprof/symbol` | Symbol lookup |

When profiling is disabled the server still listens but returns `404` for every request.
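A quick way to check whether profiling is currently enabled is to probe any pprof
endpoint and look at the status code: `200` means enabled, `404` means disabled. This
sketch assumes a `kubectl port-forward` to the pod on port 8008 is already running (see
"Accessing Profiles" below):

```bash
# Prints 200 when profiling is enabled, 404 when it is disabled.
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8008/debug/pprof/
```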

## Enabling Profiling

### Watcher

The **watcher** (`pipelines-as-code-watcher`) uses Knative's `sharedmain` framework,
which watches the `config-observability` ConfigMap and toggles profiling **without a
restart**.

**`PAC_DISABLE_HEALTH_PROBE=true` must be set on the watcher; otherwise a port conflict
on 8080 causes the profiling server to shut down:**

```bash
kubectl set env deployment/pipelines-as-code-watcher \
  -n pipelines-as-code \
  PAC_DISABLE_HEALTH_PROBE=true
```

Then enable profiling via the ConfigMap:

```bash
kubectl patch configmap pipelines-as-code-config-observability \
  -n pipelines-as-code \
  --type merge \
  -p '{"data":{"profiling.enable":"true"}}'
```

To disable profiling:

```bash
kubectl patch configmap pipelines-as-code-config-observability \
  -n pipelines-as-code \
  --type merge \
  -p '{"data":{"profiling.enable":"false"}}'
```

The watcher picks up the ConfigMap change immediately without a restart.
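To confirm the change landed, you can read the key back from the ConfigMap; the
backslash escapes the dot inside the key name for JSONPath:

```bash
# Prints "true" when profiling is enabled; empty output means the key is unset.
kubectl get configmap pipelines-as-code-config-observability \
  -n pipelines-as-code \
  -o jsonpath='{.data.profiling\.enable}'
```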

### Webhook

The **webhook** (`pipelines-as-code-webhook`) also uses `sharedmain` and supports
dynamic toggling via the same ConfigMap. Unlike the watcher, the webhook does not run
its own health probe server, so `PAC_DISABLE_HEALTH_PROBE` is not required.

The webhook deployment does not set `CONFIG_OBSERVABILITY_NAME` by default, so it
falls back to looking for a ConfigMap named `config-observability`, which does not
exist in the PAC namespace. Set the environment variable first:

```bash
kubectl set env deployment/pipelines-as-code-webhook \
  -n pipelines-as-code \
  CONFIG_OBSERVABILITY_NAME=pipelines-as-code-config-observability
```

Then use the same `kubectl patch` on the ConfigMap above to enable or disable profiling.
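You can verify the variable is in place with a JSONPath filter on the Deployment's
environment:

```bash
# Should print: pipelines-as-code-config-observability
kubectl get deployment pipelines-as-code-webhook \
  -n pipelines-as-code \
  -o jsonpath='{.spec.template.spec.containers[0].env[?(@.name=="CONFIG_OBSERVABILITY_NAME")].value}'
```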

### Controller

The **controller** (`pipelines-as-code-controller`) uses the Knative eventing adapter
framework. Profiling is configured at startup from the `K_METRICS_CONFIG` environment
variable and is **not** dynamically reloaded; a pod restart is required after any change.

The `K_METRICS_CONFIG` variable contains a JSON object whose `ConfigMap` field holds
inline key/value configuration data. To enable profiling, add `"profiling.enable":"true"`
inside that `ConfigMap` object:

```bash
# Read the current value first
kubectl get deployment pipelines-as-code-controller \
  -n pipelines-as-code \
  -o jsonpath='{.spec.template.spec.containers[0].env[?(@.name=="K_METRICS_CONFIG")].value}'
```

Then patch the Deployment with `profiling.enable` added to the `ConfigMap` field, for example:

```bash
kubectl set env deployment/pipelines-as-code-controller \
  -n pipelines-as-code \
  'K_METRICS_CONFIG={"Domain":"pipelinesascode.tekton.dev/controller","Component":"pac_controller","PrometheusPort":9090,"ConfigMap":{"name":"pipelines-as-code-config-observability","profiling.enable":"true"}}'
```

This triggers a rolling restart of the controller pod. Remove `"profiling.enable":"true"`
(or set it to `"false"`) and re-apply to disable.
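Editing that JSON by hand is error-prone. One way to script the toggle is to read the
current value, add the key with `jq`, and write it back; this is a sketch and assumes
`jq` is installed locally:

```bash
ns=pipelines-as-code
deploy=pipelines-as-code-controller

# Read the current K_METRICS_CONFIG value from the Deployment.
current=$(kubectl get deployment "$deploy" -n "$ns" \
  -o jsonpath='{.spec.template.spec.containers[0].env[?(@.name=="K_METRICS_CONFIG")].value}')

# Add profiling.enable inside the ConfigMap object, then re-apply.
updated=$(printf '%s' "$current" | jq -c '.ConfigMap["profiling.enable"] = "true"')
kubectl set env "deployment/$deploy" -n "$ns" "K_METRICS_CONFIG=$updated"
```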

## Accessing Profiles

Port 8008 is not declared in the container spec by default. To make it reachable, patch
the target Deployment(s) to add the port:

```bash
for deploy in pipelines-as-code-watcher pipelines-as-code-controller pipelines-as-code-webhook; do
  kubectl patch deployment "$deploy" \
    -n pipelines-as-code \
    --type json \
    -p '[{"op":"add","path":"/spec/template/spec/containers/0/ports/-","value":{"name":"profiling","containerPort":8008,"protocol":"TCP"}}]'
done
```

Note that the `/ports/-` path appends to an existing `ports` list; if a container
declares no ports at all, use `"path":"/spec/template/spec/containers/0/ports"` with a
list value (`[{...}]`) instead.

This triggers a rolling restart of the pods. Once they are running, you can access
the pprof endpoints.
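To confirm the patch took effect, list the named container port:

```bash
# Should print "8008" once the patch has been applied.
kubectl get deployment pipelines-as-code-watcher \
  -n pipelines-as-code \
  -o jsonpath='{.spec.template.spec.containers[0].ports[?(@.name=="profiling")].containerPort}'
```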

### Using `kubectl port-forward`

The recommended way to access the profiling server is with `kubectl port-forward`. This
forwards a local port on your machine to the port on the pod, without exposing it to the
cluster network.

First, get the name of the pod you want to profile. Choose the label that matches the
component:

```bash
# Watcher
export POD_NAME=$(kubectl get pods -n pipelines-as-code \
  -l app.kubernetes.io/name=watcher \
  -o jsonpath='{.items[0].metadata.name}')

# Controller
export POD_NAME=$(kubectl get pods -n pipelines-as-code \
  -l app.kubernetes.io/name=controller \
  -o jsonpath='{.items[0].metadata.name}')

# Webhook
export POD_NAME=$(kubectl get pods -n pipelines-as-code \
  -l app.kubernetes.io/name=webhook \
  -o jsonpath='{.items[0].metadata.name}')
```

Then, forward a local port to the pod's profiling port:

```bash
kubectl port-forward -n pipelines-as-code $POD_NAME 8008:8008
```

The pprof index is now available at `http://localhost:8008/debug/pprof/`.
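As a quick smoke test, fetch a human-readable goroutine dump; the `debug` query
parameter (a standard `net/http/pprof` option) switches the output from binary protobuf
to text:

```bash
# debug=2 prints every goroutine with its full stack trace.
curl -s 'http://localhost:8008/debug/pprof/goroutine?debug=2' | head -n 20
```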

### Changing the profiling port

If port 8008 conflicts with another service, set the `PROFILING_PORT` environment
variable on the Deployment to use a different port:

```bash
kubectl set env deployment/pipelines-as-code-watcher \
  -n pipelines-as-code \
  PROFILING_PORT=8090
```

Update the `containerPort` in the patch above and your port-forward command to match.

### Capturing profiles with `go tool pprof`

With `kubectl port-forward` running, use `go tool pprof` to analyze profiles directly:

```bash
# Heap profile
go tool pprof http://localhost:8008/debug/pprof/heap

# 30-second CPU profile
go tool pprof http://localhost:8008/debug/pprof/profile

# Goroutine dump
go tool pprof http://localhost:8008/debug/pprof/goroutine
```
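The CPU profile length is controlled by the standard `seconds` query parameter; quote
the URL so the shell does not interpret the `?`:

```bash
# Capture a 60-second CPU profile instead of the 30-second default.
go tool pprof 'http://localhost:8008/debug/pprof/profile?seconds=60'
```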

### Saving profiles to disk

You can also save profiles to disk for later analysis using `curl`:

```bash
# Save a heap profile
curl -o heap-$(date +%Y%m%d-%H%M%S).pb.gz \
  http://localhost:8008/debug/pprof/heap

# Analyze later - CLI
go tool pprof heap-<timestamp>.pb.gz

# Analyze later - interactive web UI (opens browser at http://localhost:8009)
go tool pprof -http=:8009 heap-<timestamp>.pb.gz
```
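When investigating a suspected memory leak, capture two heap snapshots some time apart
and compare them with pprof's `-base` flag, which subtracts the first profile from the
second so only the growth in between remains:

```bash
# Two snapshots, a few minutes apart
curl -o heap-before.pb.gz http://localhost:8008/debug/pprof/heap
sleep 300
curl -o heap-after.pb.gz http://localhost:8008/debug/pprof/heap

# Show only allocations that appeared between the two snapshots
go tool pprof -base heap-before.pb.gz heap-after.pb.gz
```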

## Security Considerations

The profiling server exposes internal runtime data. Because port 8008 is not declared
in the container spec by default, access requires an explicit Deployment patch, limiting
it to users with `deployments/patch` permission in the `pipelines-as-code` namespace.

Do not expose port 8008 via a Service or Ingress in production environments. Disable
profiling (`profiling.enable: "false"`) when not actively investigating an issue.