Background
Woodpecker service supports multi-region and multi-AZ deployment.
On the write path, the system is topology-aware and can distribute replicas across AZs (e.g., 3-AZ replication).
However, on the read path, the Woodpecker client currently lacks awareness of its own AZ/Region, which leads to:
- Non-topology-aware replica selection
- Unnecessary cross-AZ reads
- Increased latency and network cost
Problem
Even when a same-AZ replica exists, the client may still read from a remote AZ because:
- Client does not know its own AZ/Region
- Replica selection is not locality-aware
Goal
Enable the Woodpecker client to:
- Detect its own AZ/Region (if available)
- Prefer same-AZ replicas for reads
- Reduce cross-AZ traffic while preserving correctness
Proposal
- Client-side Topology Awareness
Client should obtain AZ/Region information with the following priority:
- Environment variables (recommended default)
- Injected from Kubernetes Node topology labels (note: the Downward API exposes only Pod fields, so these Node labels typically need to be copied into the Pod's env by an init container, operator, or admission webhook):
- topology.kubernetes.io/zone
- topology.kubernetes.io/region
- Example:
AVAILABILITY_ZONE=zone-a
CLUSTER_NAME=region-1
- Explicit env (for non-K8s or override)
- Allow user to directly specify:
export AVAILABILITY_ZONE=zone-a
export CLUSTER_NAME=region-1
- Fallback (no topology info)
- If neither env nor config is provided:
- Treat client as topology-unaware
- Use existing replica selection logic (no locality preference)
👉 This ensures:
- Works naturally in Kubernetes environments
- Flexible for non-K8s deployments
- Fully backward compatible
- Read Replica Selection Optimization
Enhance read path selection strategy:
Priority order:
- Same AZ replicas (preferred)
- Same region, different AZ
- Cross-region replicas
Fallback conditions (fall through to the next-preferred tier when):
- No same-AZ replica exists
- The same-AZ replica is not ready or unhealthy
- A read times out and is retried
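The priority order and fallback above amount to ranking healthy replicas by locality and trying them in order. A minimal sketch in Go; the `Replica` fields and `OrderReplicas` helper are hypothetical, not the actual Woodpecker replica metadata:

```go
package main

import (
	"fmt"
	"sort"
)

// Replica describes one readable copy. Field names are illustrative.
type Replica struct {
	Addr    string
	Zone    string
	Region  string
	Healthy bool
}

// localityRank encodes the proposed priority order:
// 0 = same AZ, 1 = same region / different AZ, 2 = cross-region.
func localityRank(r Replica, zone, region string) int {
	switch {
	case zone != "" && r.Zone == zone:
		return 0
	case region != "" && r.Region == region:
		return 1
	default:
		return 2
	}
}

// OrderReplicas filters out unhealthy replicas and sorts the rest by
// locality, so the caller tries same-AZ first and naturally falls back
// to same-region and then cross-region replicas on timeout or failure.
// With empty zone/region (topology-unaware client) every replica ranks
// the same and the stable sort preserves the existing order.
func OrderReplicas(replicas []Replica, zone, region string) []Replica {
	out := make([]Replica, 0, len(replicas))
	for _, r := range replicas {
		if r.Healthy {
			out = append(out, r)
		}
	}
	sort.SliceStable(out, func(i, j int) bool {
		return localityRank(out[i], zone, region) < localityRank(out[j], zone, region)
	})
	return out
}

func main() {
	replicas := []Replica{
		{Addr: "10.0.3.1", Zone: "zone-c", Region: "region-1", Healthy: true},
		{Addr: "10.0.1.1", Zone: "zone-a", Region: "region-1", Healthy: true},
		{Addr: "10.0.9.1", Zone: "zone-x", Region: "region-2", Healthy: true},
	}
	for _, r := range OrderReplicas(replicas, "zone-a", "region-1") {
		fmt.Println(r.Addr, "rank", localityRank(r, "zone-a", "region-1"))
	}
}
```

Ranking rather than hard-filtering keeps correctness intact: remote replicas stay in the candidate list, so reads still succeed when no local replica is available.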
- Integration with Existing Replication Strategy
- Align with current multi-AZ (e.g., 3-AZ) replica placement
- Ensure no impact on:
- consistency guarantees
- quorum logic (if applicable)
Acceptance Criteria
- Client can obtain AZ/Region from env or config
- Same-AZ reads are preferred when possible
- Cross-AZ reads are reduced
- Fallback works when topology info is missing
- No regression in correctness or availability
Impact
- Reduced cross-AZ traffic cost
- Lower read latency
- Better alignment with multi-AZ deployment
Notes
- This issue focuses on read path optimization
- Write path (replica placement) is already topology-aware and not in scope