This document provides conventions for AI coding agents working on this repository.
For full project context, see .github/copilot-instructions.md.
```sh
make manifests generate  # After editing *_types.go or kubebuilder markers
make fmt vet lint        # Format, vet, lint
make test                # Unit + integration tests (envtest)
make helm                # Sync CRDs to Helm chart
```

Repository layout:

```
api/authorization/v1alpha1/  CRD types & webhooks (kubebuilder v4 multi-group)
internal/controller/         Reconcilers (RoleDefinition, BindDefinition)
internal/webhook/            Admission webhook handlers, cert rotation
pkg/                         Shared libraries (conditions, SSA, metrics, discovery)
config/                      Kustomize overlays (CRDs, RBAC, webhook are auto-generated)
chart/auth-operator/         Helm chart
test/e2e/                    Ginkgo E2E tests
```
- Never edit auto-generated files: `config/crd/bases/`, `config/rbac/role.yaml`, `zz_generated.deepcopy.go`, `chart/auth-operator/crds/`.
- Never remove `// +kubebuilder:scaffold:*` comments.
- After editing `*_types.go`: run `make manifests generate docs helm`.
- Import alias convention: use descriptive package aliases: `authorizationv1alpha1` for `api/authorization/v1alpha1`, `ctrl` for `sigs.k8s.io/controller-runtime`, `rbacv1` for `k8s.io/api/rbac/v1`.
- Error wrapping: always use `fmt.Errorf("context: %w", err)`, never `fmt.Errorf("context: %v", err)`.
- Standard library constants: use `http.MethodGet` not `"GET"`, `rbacv1.GroupName` not `"rbac.authorization.k8s.io"`.
- REUSE compliance: all new files must have SPDX headers or be covered by a glob in `REUSE.toml`.
- Test patterns: use Ginkgo/Gomega for controller tests, standard `testing` for unit tests. Target >70% coverage.
- Condition management: use `pkg/conditions.SetCondition()`; never set conditions manually on status.
- Server-Side Apply: use `pkg/ssa` helpers for RBAC resources; never use `Update()` for managed objects.
```sh
make test                # Unit + envtest integration
make test-e2e-full       # Full E2E (requires kind + Docker)
make test-e2e-helm-full  # Helm installation E2E
```

E2E test labels: `helm`, `complex`, `ha`, `leader-election`, `integration`, `golden`, `dev`.
All PRs must pass: golangci-lint, go vet, go mod tidy check, unit tests (envtest), Docker build, Helm lint, govulncheck, Trivy scan, REUSE compliance.
Prompts are in .github/prompts/ and can be invoked by name:
| Prompt | Category | Purpose |
|---|---|---|
| **Task Prompts** | | |
| `review-pr` | General | PR checklist (code quality, testing, security, docs) |
| `add-crd-field` | Task | Step-by-step guide for adding a new CRD field |
| `helm-chart-changes` | Task | Helm chart modification checklist |
| `github-pr-management` | Workflow | GitHub PR workflows: review threads, rebasing, squashing, CI checks |
| **Code Quality Reviewers** | | |
| `review-go-style` | Lint | golangci-lint v2 compliance: importas, errorlint, godot, revive, goconst, strict lint |
| `review-concurrency` | Safety | SSA ownership, condition management, cache staleness, webhook timeout, retry-on-conflict |
| `review-k8s-patterns` | Ops | Error handling, idempotency, conditions via pkg/conditions, structured logging |
| `review-performance` | Perf | Reconciler efficiency, namespace enumeration, SSA no-op detection, metrics cardinality |
| `review-integration-wiring` | Wiring | Dead code, unwired fields, SSA apply completeness, RBAC marker→Helm propagation |
| **API & Security Reviewers** | | |
| `review-api-crd` | API | CRD schema, backwards compat, webhook validation, SSA apply configuration completeness |
| `review-security` | Security | RBAC least privilege, privilege escalation prevention, SSA field ownership, DoS protection |
| **Documentation & Testing Reviewers** | | |
| `review-docs-consistency` | Docs | Documentation ↔ code alignment: field names, conditions, Helm values, API reference |
| `review-ci-testing` | Testing | Test coverage, Ginkgo/Gomega patterns, assertion quality, CI workflow alignment |
| `review-edge-cases` | Testing | Zero/nil/empty values, namespace lifecycle, SSA conflicts, webhook timing, fuzz properties |
| `review-qa-regression` | QA | RBAC generation regression, condition regression, SSA ownership changes, rollback safety |
| **User Experience Reviewers** | | |
| `review-end-user` | UX | End-user experience: platform engineer, cluster admin, security auditor |
Invoke each review prompt in sequence against a code change and collect the findings.
The 12 reviewer personas (out of 16 prompts total; the remaining 4 are task and
workflow guides: `review-pr`, `add-crd-field`, `helm-chart-changes`,
`github-pr-management`) cover every issue class found by automated reviewers
(Copilot, etc.) and more.
The personas are grouped below by the class of bug each one catches (the table above groups them by domain):
Code quality (4 personas):
- Go style catches import alias violations (`authorizationv1alpha1` enforcement), `%v` error wrapping, `godot` comment periods, `revive` naming
- Concurrency catches SSA ownership conflicts, condition management bypasses, stale cache reads
- K8s patterns catches missing context timeouts, non-idempotent reconcilers, condition mis-management
- Performance catches unbounded namespace enumeration, SSA no-op waste, high-cardinality metrics
Correctness (4 personas):
- Integration wiring catches new code that is defined but never called, SSA apply gaps, RBAC drift, PR description ↔ implementation alignment
- API & CRD catches missing validation markers, backwards-compatibility breaks (incl. validation tightening as breaking), SSA completeness
- Edge cases catches namespace lifecycle races, SSA conflicts, zero-value bugs, webhook timing, SSA field ownership edge cases (ForceOwnership wars, GC interactions)
- QA regression catches RBAC generation regressions, condition reason changes, rollback hazards, verification discipline (search codebase before flagging)
Security & documentation (3 personas):
- Security catches privilege escalation via RBAC generation, webhook bypass, DoS vectors, error response sanitization (no internal details in admission responses)
- Docs consistency catches field name mismatches, stale condition references, Helm doc drift
- CI & testing catches coverage gaps, Ginkgo/testify mixing, missing enum cases, golden staleness, verification discipline (search tests before flagging)
User-facing (1 persona):
- End-user catches platform engineer confusion, admin upgrade friction, auditor visibility gaps