What steps did you take and what happened:
I upgraded lvm-localpv to the current latest release (1.8.0).
All pods are fine (running + ready), but there are endless errors in the controller log:
[12.01.2026, 11:13:48,819 PM] lvm-provisioner-lvm-localpv-controller-768f47467f-bqj2q csi-provisioner E0112 16:10:31.951647 1 capacity.go:551] "Unhandled Error" err="update CSIStorageCapacity for {segment:0xc000c00168 storageClassName:redis-data}: CSIStorageCapacity.storage.k8s.io \"csisc-cq6n5\" is invalid: metadata.ownerReferences: Invalid value: []v1.OwnerReference{v1.OwnerReference{APIVersion:\"apps/v1\", Kind:\"ReplicaSet\", Name:\"lvm-provisioner-lvm-localpv-controller-74d8dd6565\", UID:\"019fb1f3-bf2f-453a-9c24-e531aae9b657\", Controller:(*bool)(0xc033f801d9), BlockOwnerDeletion:(*bool)(nil)}, v1.OwnerReference{APIVersion:\"apps/v1\", Kind:\"ReplicaSet\", Name:\"lvm-provisioner-lvm-localpv-controller-768f47467f\", UID:\"d4b48962-2cea-4db0-9255-ca209b09a3ee\", Controller:(*bool)(0xc033f801da), BlockOwnerDeletion:(*bool)(nil)}}: Only one reference can have Controller set to true. Found \"true\" in references for ReplicaSet/lvm-provisioner-lvm-localpv-controller-74d8dd6565 and ReplicaSet/lvm-provisioner-lvm-localpv-controller-768f47467f" logger="UnhandledError"
[12.01.2026, 11:13:48,819 PM] lvm-provisioner-lvm-localpv-controller-768f47467f-bqj2q csi-provisioner W0112 16:10:31.952776 1 capacity.go:552] Retrying capacity.workItem{segment:(*topology.Segment)(0xc000c00168), storageClassName:"redis-data"} after 10 failures
This repeats for all nodes in the cluster.
What did you expect to happen:
Clean logs.
Anything else you would like to add:
A workaround is to regenerate all CSIStorageCapacity objects from scratch with:
kubectl delete CSIStorageCapacity --all
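A fuller sketch of the workaround, for reference. The namespace (`openebs` here) is an assumption; CSIStorageCapacity objects are namespaced and are typically created in the namespace where the controller runs, so adjust `-n` (or use `--all-namespaces`) for your installation:

```shell
# Delete the stale CSIStorageCapacity objects carrying two controller
# ownerReferences; the external-provisioner recreates them with a single
# controller reference pointing at the current ReplicaSet.
# Assumption: the objects live in the "openebs" namespace.
kubectl -n openebs delete csistoragecapacities --all

# Verify that fresh objects have been created (may take a minute):
kubectl -n openebs get csistoragecapacities
```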
Environment:
- LVM Driver version: 1.8.0
- Kubernetes version (use kubectl version): v1.34.1
- Kubernetes installer & version: kubeadm, v1.33.7
- Cloud provider or hardware configuration: Bare metal
- OS (e.g. from /etc/os-release): Ubuntu 22.04