This guide covers deploying PodScope to your Kubernetes cluster.
## Prerequisites

- Kubernetes 1.19+ cluster with `kubectl` configured
- Helm 3.0+ (for Helm installation)
- Prometheus or VictoriaMetrics accessible from the cluster
- (Optional) Tailscale Operator for external access via Tailscale
- (Optional) Redis for BullMQ queue monitoring
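A quick preflight sketch for the CLI prerequisites. This is a hypothetical helper, not part of PodScope; it only checks that the binaries are on `PATH` and does not verify versions (1.19+ / 3.0+):

```shell
#!/usr/bin/env bash
# Hypothetical preflight check: report whether each required CLI tool
# exists on PATH. Version checks are left to the reader.
check_tools() {
  for tool in "$@"; do
    if command -v "$tool" >/dev/null 2>&1; then
      echo "$tool: found"
    else
      echo "$tool: MISSING"
    fi
  done
}

check_tools kubectl helm
```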
## Installing with Helm (Recommended)

Helm is the easiest way to install and manage PodScope.
```bash
# Add the Helm repository
helm repo add podscope https://kadajett.github.io/PodScope/
helm repo update

# Install PodScope
helm install podscope podscope/podscope \
  --namespace monitoring \
  --create-namespace \
  --set config.prometheusUrl=http://prometheus.monitoring.svc.cluster.local:9090
```

Create a `values.yaml` file:
```yaml
config:
  # Prometheus URL (required)
  prometheusUrl: "http://prometheus.monitoring.svc.cluster.local:9090"

  # Optional: BullMQ monitoring
  redisInstances: "app:redis.default.svc.cluster.local:6379"

  # Optional: VictoriaMetrics
  victoriaMetricsUrl: ""

  # Optional: Grafana integration
  grafanaUrl: ""

# Enable ingress
ingress:
  enabled: true
  className: "nginx"
  hosts:
    - host: podscope.example.com
      paths:
        - path: /
          pathType: Prefix

# Resource limits
resources:
  requests:
    memory: "512Mi"
    cpu: "200m"
  limits:
    memory: "1Gi"
    cpu: "500m"
```

Install with custom values:

```bash
helm install podscope podscope/podscope -f values.yaml --namespace monitoring --create-namespace
```

See the Helm Chart README for all available configuration options.
Common configurations:
- `config.prometheusUrl` - Prometheus server URL (required)
- `config.redisInstances` - Redis instances for BullMQ monitoring
- `ingress.enabled` - Enable the ingress controller
- `tailscale.enabled` - Enable Tailscale ingress
- `resources` - CPU and memory limits
Upgrade:

```bash
helm upgrade podscope podscope/podscope -f values.yaml
```

Uninstall:

```bash
helm uninstall podscope --namespace monitoring
```

## Manual Installation with kubectl

For users who prefer direct `kubectl` deployment or need customization beyond Helm.
```bash
# Build the image
docker build -t ghcr.io/kadajett/podscope:latest .

# Push to registry
docker push ghcr.io/kadajett/podscope:latest
```

Edit `k8s/configmap.yaml` to match your environment:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: podscope-config
  namespace: podscope
data:
  PROMETHEUS_URL: "http://prometheus.monitoring.svc.cluster.local:9090"
  REDIS_INSTANCES: "app:redis.default.svc.cluster.local:6379"
  KUBECTL_EXEC_RATE_LIMIT: "10"
  LOG_LEVEL: "info"
```

Update `k8s/deployment.yaml` with your image:

```yaml
image: ghcr.io/kadajett/podscope:latest
```

If you need Grafana API access or Redis authentication:
```bash
cp k8s/secret.yaml.example k8s/secret.yaml
# Edit k8s/secret.yaml with your credentials
kubectl apply -f k8s/secret.yaml
```

Apply the manifests:

```bash
# Apply all manifests
kubectl apply -f k8s/

# Or apply in order:
kubectl apply -f k8s/namespace.yaml
kubectl apply -f k8s/serviceaccount.yaml
kubectl apply -f k8s/configmap.yaml
kubectl apply -f k8s/deployment.yaml
kubectl apply -f k8s/service.yaml

# Optional: Tailscale ingress
kubectl apply -f k8s/tailscale-ingress.yaml
```

Verify the deployment:

```bash
# Check pod status
kubectl get pods -n podscope

# View logs
kubectl logs -n podscope -l app=podscope -f

# Test the service
kubectl port-forward -n podscope svc/podscope 3000:80
# Open http://localhost:3000
```

## Accessing PodScope

Access via port-forward:

```bash
kubectl port-forward svc/podscope 3000:80 -n monitoring
```

### Ingress

Configure in `values.yaml`:
```yaml
ingress:
  enabled: true
  className: "nginx"
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
  hosts:
    - host: podscope.example.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: podscope-tls
      hosts:
        - podscope.example.com
```

### Tailscale

For secure external access via Tailscale:
```yaml
tailscale:
  enabled: true
  hostname: "podscope"
  tags: "tag:k8s"
```

Access at: `https://podscope.your-tailnet.ts.net`
### LoadBalancer or NodePort

Change the service type in `values.yaml`:

```yaml
service:
  type: LoadBalancer  # or NodePort
  port: 80
```

## Configuration

### Prometheus

PodScope needs to connect to your Prometheus instance:

```yaml
config:
  prometheusUrl: "http://prometheus.monitoring.svc.cluster.local:9090"
```

- In-cluster Prometheus: use the service DNS name.
- External Prometheus: use the full URL, including the port.
### VictoriaMetrics

If you use VictoriaMetrics instead of Prometheus:

```yaml
config:
  victoriaMetricsUrl: "http://victoriametrics.monitoring.svc.cluster.local:8428"
```

### BullMQ (Redis)

Monitor BullMQ job queues across multiple Redis instances:

```yaml
config:
  redisInstances: "app:redis.default.svc.cluster.local:6379,cache:redis.cache.svc.cluster.local:6379:password"
```

Format: `name:host:port` or `name:host:port:password`
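The instance-list format can be split with plain bash. `parse_redis_instance` below is a hypothetical helper to illustrate the field layout, not part of PodScope:

```shell
#!/usr/bin/env bash
# Hypothetical helper: split one entry of the assumed
# name:host:port[:password] format into labeled fields.
parse_redis_instance() {
  local name host port password
  IFS=':' read -r name host port password <<< "$1"
  echo "name=$name host=$host port=$port password=${password:-none}"
}

# Split the comma-separated list from config.redisInstances:
IFS=',' read -ra instances <<< "app:redis.default.svc.cluster.local:6379,cache:redis.cache.svc.cluster.local:6379:s3cret"
for inst in "${instances[@]}"; do
  parse_redis_instance "$inst"
done
```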
### Grafana

Optional integration with the Grafana API:

```yaml
config:
  grafanaUrl: "http://grafana.monitoring.svc.cluster.local:80"
secrets:
  grafanaApiKey: "your-api-key-here"
```

## RBAC

PodScope requires read-only access to cluster resources. The Helm chart automatically creates:
- ServiceAccount
- ClusterRole (read-only permissions)
- ClusterRoleBinding
Required permissions:
- Read pods, nodes, services, deployments, namespaces
- Read pod logs
- Read configmaps, secrets (optional for advanced features)
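As a rough illustration only (not the chart's actual output), a read-only ClusterRole covering those permissions could look like:

```yaml
# Illustrative read-only ClusterRole; the name and exact rule grouping
# are assumptions, not what the Helm chart necessarily generates.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: podscope-readonly
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/log", "nodes", "services", "namespaces", "configmaps", "secrets"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "watch"]
```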
To customize RBAC:

```yaml
rbac:
  additionalRules:
    - apiGroups: ["custom.io"]
      resources: ["customresources"]
      verbs: ["get", "list"]
```

## Security

PodScope limits kubectl operations to safe, read-only commands:

- Allows only `get`, `describe`, `logs`, and `top`
- Blocks destructive commands and sensitive flags
API endpoints are rate-limited:

```yaml
config:
  kubectlExecRateLimit: "10"  # requests per minute
```

Recommended: restrict PodScope's network access with a NetworkPolicy:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: podscope-netpol
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: podscope
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              name: monitoring
      ports:
        - protocol: TCP
          port: 9090  # Prometheus
    - to:
        - namespaceSelector: {}
      ports:
        - protocol: TCP
          port: 443  # Kubernetes API
```

## Troubleshooting

If the pod fails to start, inspect it and its logs:

```bash
kubectl describe pod -n monitoring -l app.kubernetes.io/name=podscope
kubectl logs -n monitoring -l app.kubernetes.io/name=podscope
```

Common issues:
- Missing Prometheus URL in configuration
- RBAC permissions not created
- Image pull errors
Test connectivity from within the pod:

```bash
kubectl exec -it -n monitoring <podscope-pod> -- curl http://prometheus.monitoring.svc.cluster.local:9090/api/v1/status/config
```

Adjust resource limits:

```yaml
resources:
  limits:
    memory: "1Gi"
```

Check the log level configuration:
```yaml
config:
  logging:
    level: "debug"  # or info, warn, error
```

## Monitoring PodScope

PodScope itself can be monitored. It exposes metrics at `/metrics` (coming soon).
Health endpoints:

- Liveness: `GET /` (returns 200 if the app is running)
- Readiness: `GET /` (returns 200 if the app is ready)
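In a raw Deployment manifest, probes against those endpoints might be declared like this. The container port and timings here are assumptions; match them to your image:

```yaml
# Sketch of probe config for the container spec; port 3000 is assumed.
livenessProbe:
  httpGet:
    path: /
    port: 3000
  initialDelaySeconds: 10
  periodSeconds: 15
readinessProbe:
  httpGet:
    path: /
    port: 3000
  periodSeconds: 10
```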
## Multi-Cluster

Deploy PodScope in each cluster with unique release names:

```bash
helm install podscope-prod podscope/podscope \
  --set config.prometheusUrl=http://prometheus.prod.svc.cluster.local:9090

helm install podscope-staging podscope/podscope \
  --set config.prometheusUrl=http://prometheus.staging.svc.cluster.local:9090
```

Dashboard configurations are stored in browser localStorage; export and import them via the UI settings.
Add custom PromQL queries via the UI. See the Query Library documentation.
## Support

- GitHub: https://github.com/Kadajett/PodScope
- Issues: https://github.com/Kadajett/PodScope/issues
- Documentation: https://github.com/Kadajett/PodScope#readme
## License

PodScope is licensed under BSL-1.1 (Business Source License 1.1). See LICENSE.md for details.