Infrastructure-as-code and documentation for Colin's homelab — a single Beelink mini-PC running containerised services behind Tailscale and Cloudflare Tunnel, managed by Dokploy.
```sh
mise install        # Install Python, uv, shellcheck, actionlint, trivy
mise run lint       # Lint everything (Python, bash, YAML, Actions)
mise run typecheck  # Type-check Python
mise run test       # Run 24 pytest tests
mise run ci         # All of the above + validate compose files
```

On the server:
```sh
mise run deploy:all            # Deploy all stacks
mise run check:health          # Health check all services
mise run check:security        # Security posture audit
mise run check:vulnerabilities # Scan images for CVEs
mise run setup                 # Bootstrap a fresh server
```

| Service | Purpose | Managed By |
|---|---|---|
| Flight Tracker | Real-time aviation dashboard (FastAPI + React) | Dokploy (auto-deploy from GitHub) |
| Cloudflared | Public ingress via Cloudflare Tunnel | Dokploy |
| Home Assistant | Home automation (Bluetooth, mDNS) | Docker Compose (stacks/home-assistant/) |
| MQTT (Mosquitto) | Message broker for HA sensors | Docker Compose (stacks/mqtt/) |
| Grafana | Dashboards — metrics, logs, alerts | Docker Compose (stacks/observability/) |
| Prometheus | Metrics storage (30d retention) | Docker Compose (stacks/observability/) |
| Loki | Log aggregation (30d retention) | Docker Compose (stacks/observability/) |
| Grafana Alloy | Unified collector (host + container metrics/logs) | Docker Compose (stacks/observability/) |
| CrowdSec | Collaborative IDS + firewall bouncer | Docker Compose (stacks/crowdsec/) |
| Dokploy | PaaS dashboard, logs, metrics, alerts | Docker Swarm (self-managed) |
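The `mise run` commands above correspond to tasks defined in the repo's mise configuration. A minimal sketch of what that might look like — the tool versions, task bodies, and script paths here are assumptions for illustration, not the repo's actual config:

```toml
# Hypothetical mise.toml excerpt — versions and script paths are illustrative
[tools]
python = "3.12"
uv = "latest"
shellcheck = "latest"

[tasks."check:health"]
description = "Health check all services"
run = "uv run python scripts/check_health.py"

[tasks.ci]
description = "Lint, type-check, test, validate compose files"
depends = ["lint", "typecheck", "test"]
```

mise resolves `depends` before running a task, so `mise run ci` fans out to the individual checks.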
- Beelink mini-PC — Ubuntu 24.04 LTS, ~16 GB RAM, 466 GB SSD (10% used)
- Network — LAN, Tailscale mesh, IPv6 via ISP (no public IPv4)
```
Internet
│
├─ Cloudflare Tunnel ──→ Public services (flight-tracker API)
│
└─ Tailscale ──→ Admin access (SSH, Dokploy, Home Assistant)
   │
   └─ ACLs: desktop=full, mobile=HA only, CI=Dokploy only
```
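The ACL summary above could correspond to a tailnet policy roughly like this sketch — the tags, device groups, and ports are assumptions (8123 is Home Assistant's default, 3000 Dokploy's), not the real policy:

```jsonc
// Hypothetical tailnet policy excerpt — tags and ports are illustrative
{
  "tagOwners": {
    "tag:ci": ["autogroup:admin"]
  },
  "acls": [
    // Desktop gets full access to the server
    { "action": "accept", "src": ["tag:desktop"], "dst": ["tag:server:*"] },
    // Mobile devices may only reach Home Assistant
    { "action": "accept", "src": ["tag:mobile"], "dst": ["tag:server:8123"] },
    // CI may only reach the Dokploy API
    { "action": "accept", "src": ["tag:ci"], "dst": ["tag:server:3000"] }
  ]
}
```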
```
Push to main (flight-tracker repo)
  → GitHub Actions CI (lint, test, build)
  → Tailscale GitHub Action joins tailnet as tag:ci
  → curl → Dokploy API triggers rebuild + deploy
  → Discord notification on success/failure
```
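Sketched as a workflow job, the deploy step might look like this — the Dokploy hostname, API endpoint path, application ID, and secret names are all placeholders, not the real workflow:

```yaml
# Hypothetical deploy job — hostname, endpoint, and secrets are placeholders
deploy:
  needs: [ci]
  runs-on: ubuntu-latest
  steps:
    - name: Join tailnet as tag:ci
      uses: tailscale/github-action@v2
      with:
        oauth-client-id: ${{ secrets.TS_OAUTH_CLIENT_ID }}
        oauth-secret: ${{ secrets.TS_OAUTH_SECRET }}
        tags: tag:ci
    - name: Trigger Dokploy rebuild + deploy
      run: |
        curl --fail-with-body -X POST \
          -H "x-api-key: ${{ secrets.DOKPLOY_API_KEY }}" \
          -H "Content-Type: application/json" \
          -d '{"applicationId": "${{ secrets.DOKPLOY_APP_ID }}"}' \
          "https://dokploy.tailnet.example/api/application.deploy"
```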
| Tool | Purpose |
|---|---|
| mise | Task runner + tool version manager |
| uv | Python package management |
| ruff | Python lint + format |
| ty | Python type checking |
| pydantic | Structured models for audits |
| trivy | Docker image CVE scanning |
| Renovate | Automated dependency PRs |
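To illustrate the pydantic row above, here is a minimal sketch of a structured audit model — the class and field names are hypothetical, not the repo's actual schema:

```python
from pydantic import BaseModel, Field


class AuditFinding(BaseModel):
    """One check from a security/health audit (hypothetical schema)."""
    check: str
    passed: bool
    detail: str = ""


class AuditReport(BaseModel):
    """Collects findings and reports overall pass/fail."""
    findings: list[AuditFinding] = Field(default_factory=list)

    @property
    def ok(self) -> bool:
        # The audit passes only if every individual check passed
        return all(f.passed for f in self.findings)


report = AuditReport(findings=[
    AuditFinding(check="ufw-active", passed=True),
    AuditFinding(check="auto-patches", passed=False,
                 detail="unattended-upgrades not enabled"),
])
print(report.ok)  # → False
```

Pydantic validates the fields on construction, so a malformed audit result fails loudly instead of silently producing a bogus report.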
```
Grafana (:3001) ← Prometheus (metrics)  ← Alloy (host + container scraping)
                ← Loki (logs)           ← Alloy (Docker log collection)
                ← CrowdSec (security metrics)
```
- Dashboards: Host Overview, Container Overview, Security (pre-provisioned)
- Alerting: CPU >80%, RAM >90%, Disk >80% → private Discord channel
- Security: CrowdSec IDS with collaborative threat intel + UFW firewall bouncer
- External: Healthchecks.io heartbeat (alerts on full server outage)
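The alert thresholds above could be expressed as a Prometheus rule along these lines — a sketch assuming node_exporter-style host metrics from Alloy; the group name, duration, and labels are illustrative:

```yaml
# Hypothetical Prometheus alert rule matching the CPU >80% threshold
groups:
  - name: host
    rules:
      - alert: HighCPU
        # CPU busy % = 100 minus the average idle rate across all cores
        expr: 100 - (avg(rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100) > 80
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "CPU above 80% for 10 minutes"
```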
- UFW firewall active (deny all except Tailscale)
- CrowdSec IDS with collaborative threat intel and UFW firewall bouncer
- Automatic security patches enabled
- Tailscale ACLs enforcing least-privilege access

Full audit and hardening details are in security.md (encrypted — clone and `git-crypt unlock` to read).
All docs are plain markdown — open docs/ as an Obsidian vault if you prefer.
- Requirements — goals, problems, and status (the "north star")
- Security — audit findings, hardening status, periodic checklist (encrypted)
- Network — topology, interfaces, traffic monitoring plan (encrypted)
- ADR-001: Dokploy — why Dokploy, what was considered, feature comparison
- ADR-002: Repo Tooling — why mise + uv + Python
- ADR-003: Observability — why GPAL stack, CrowdSec, Healthchecks.io
- Migration: Dokploy — completed migration from Dockge/Tugtainer (reference)
- Deploying Services — how to add new services (Dokploy Compose or local stack)
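For the local-stack route, a new service is a small compose file under stacks/. A minimal sketch — the directory, service name, image, and port are illustrative, not an actual stack in this repo:

```yaml
# Hypothetical stacks/whoami/compose.yaml — name, image, and port are illustrative
services:
  whoami:
    image: traefik/whoami:latest
    restart: unless-stopped
    ports:
      - "8080:80"
```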