Wrong capacity reported for CSI with thin pool when VolumeBindingMode: WaitForFirstConsumer #463

@a-bali

Description
What steps did you take and what happened:

I have 2 nodes (talos1 and talos2), each with one PV and one VG, 100% allocated to a thin pool:

openebs-lvm-localpv-node-2qqxr:/# pvs
  PV             VG            Fmt  Attr PSize    PFree
  /dev/nvme0n1p7 vg-talos-pool lvm2 a--  <226.79g    0
openebs-lvm-localpv-node-2qqxr:/# vgs
  VG            #PV #LV #SN Attr   VSize    VFree
  vg-talos-pool   1   6   0 wz--n- <226.79g    0
openebs-lvm-localpv-node-2qqxr:/# lvs
  LV                                       VG            Attr       LSize    Pool                   Origin Data%  Meta%  Move Log Cpy%Sync Convert
  pvc-0b990e91-e389-42ba-b67d-e4aefebad386 vg-talos-pool Vwi-aotz--    2.00g vg-talos-pool_thinpool        16.87
  pvc-2665ab8a-9434-4eba-a1ff-cd52c560f1ae vg-talos-pool Vwi-aotz--   10.00g vg-talos-pool_thinpool        14.31
  pvc-75db57d0-d406-4331-b41e-ece024f58394 vg-talos-pool Vwi-aotz--    8.00g vg-talos-pool_thinpool        6.01
  pvc-d139f8fe-897c-4581-8c06-1b9a27ffc0bf vg-talos-pool Vwi-aotz--   20.00g vg-talos-pool_thinpool        5.54
  pvc-e71b4218-8444-44fd-ac4c-488f08ec767b vg-talos-pool Vwi-aotz--    5.00g vg-talos-pool_thinpool        12.62
  vg-talos-pool_thinpool                   vg-talos-pool twi-aotz-- <226.34g                               1.76   8.02
openebs-lvm-localpv-node-2qqxr:/#

(the output is the same on the other node)
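As a back-of-the-envelope check on the numbers above: even though `vgs` shows `VFree 0` (the VG is fully allocated to the thin pool), the thin pool itself is almost empty. A quick sketch using the `LSize` and `Data%` values copied from the `lvs` listing:

```python
# Values from the lvs output above for vg-talos-pool_thinpool.
pool_size_g = 226.34   # LSize column (GiB, "<226.34g")
data_used_pct = 1.76   # Data% column

# Free space available for thin volumes, ignoring metadata overhead.
free_g = pool_size_g * (1 - data_used_pct / 100)
print(f"thin pool free: ~{free_g:.1f} GiB")  # ~222.4 GiB
```

So roughly 222 GiB should be reportable as capacity for thin-provisioned volumes, not zero.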

If I use volumeBindingMode: WaitForFirstConsumer in the StorageClass that enables thin provisioning, the reported capacity is zero, so the PV never gets scheduled.

~/dev/kube/homecluster main* ❯ kubectl -n openebs get csistoragecapacities csisc-cq89g -o yaml
apiVersion: storage.k8s.io/v1
capacity: "0"
kind: CSIStorageCapacity
metadata:
  creationTimestamp: "2026-03-29T12:04:10Z"
  generateName: csisc-
  labels:
    csi.storage.k8s.io/drivername: local.csi.openebs.io
    csi.storage.k8s.io/managed-by: external-provisioner
  name: csisc-cq89g
  namespace: openebs
  ownerReferences:
  - apiVersion: apps/v1
    controller: true
    kind: ReplicaSet
    name: openebs-lvm-localpv-controller-5996c4fbfc
    uid: def139e1-2edf-48a5-a2c0-af14447955ee
  resourceVersion: "773874"
  uid: 6bef2e8d-569d-4de8-9661-922b7cff2929
nodeTopology:
  matchLabels:
    kubernetes.io/hostname: talos2
    openebs.io/nodename: talos2
storageClassName: openebs-lvmpv

If I use volumeBindingMode: Immediate, the PV always gets scheduled to the first node and does not follow the pod's affinity.

How to resolve this?

Version: 1.8.0. The StorageClass is as follows:

~/dev/kube/homecluster main* ❯ kubectl get sc openebs-lvmpv -o yaml
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    helm.sh/hook: post-install,post-upgrade
    kubectl.kubernetes.io/last-applied-configuration: |
      {"allowVolumeExpansion":true,"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"helm.sh/hook":"post-install,post-upgrade","storageclass.kubernetes.io/is-default-class":"true"},"name":"openebs-lvmpv"},"parameters":{"storage":"lvm","thinProvision":"yes","vgpattern":"vg-.*"},"provisioner":"local.csi.openebs.io","reclaimPolicy":"Retain"}
    storageclass.kubernetes.io/is-default-class: "true"
  creationTimestamp: "2026-03-29T17:18:41Z"
  name: openebs-lvmpv
  resourceVersion: "965810"
  uid: 15811494-31f1-4719-82e1-6671b5c60dc1
parameters:
  storage: lvm
  thinProvision: "yes"
  vgpattern: vg-.*
provisioner: local.csi.openebs.io
reclaimPolicy: Retain
volumeBindingMode: Immediate
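For reference, the WaitForFirstConsumer variant that triggers the zero-capacity behaviour differs from the StorageClass above only in the binding mode (sketch; all other fields as shown):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-lvmpv
allowVolumeExpansion: true
parameters:
  storage: lvm
  thinProvision: "yes"
  vgpattern: vg-.*
provisioner: local.csi.openebs.io
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer   # only change vs. the SC above
```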

LVMNodes:

~/dev/kube/homecluster main* ❯ kubectl -n openebs get lvmnode talos1 -o yaml
apiVersion: local.openebs.io/v1alpha1
kind: LVMNode
metadata:
  creationTimestamp: "2026-03-28T11:55:46Z"
  generation: 44
  name: talos1
  namespace: openebs
  ownerReferences:
  - apiVersion: v1
    controller: true
    kind: Node
    name: talos1
    uid: adf953d8-0047-4165-8fff-4b268f75d9cf
  resourceVersion: "966636"
  uid: c9a50c1a-098b-4efc-8f14-75723dfa602a
volumeGroups:
- allocationPolicy: 0
  free: "0"
  lvCount: 16
  maxLv: 0
  maxPv: 0
  metadataCount: 1
  metadataFree: 501Ki
  metadataSize: 1020Ki
  metadataUsedCount: 1
  missingPvCount: 0
  name: vg-talos-pool
  permissions: 0
  pvCount: 1
  size: 232228Mi
  snapCount: 0
  thinPools:
  - free: "240964796883"
    name: vg-talos-pool_thinpool
    size: 231772Mi
  uuid: HDi2b0-zecQ-bpiD-qnSM-1iLZ-81D8-uKTy39
~/dev/kube/homecluster main* ❯ kubectl -n openebs get lvmnode talos2 -o yaml
apiVersion: local.openebs.io/v1alpha1
kind: LVMNode
metadata:
  creationTimestamp: "2026-03-28T11:55:44Z"
  generation: 92
  name: talos2
  namespace: openebs
  ownerReferences:
  - apiVersion: v1
    controller: true
    kind: Node
    name: talos2
    uid: 3e6f2a71-4ad1-40c7-bfb9-fd31eb3f4253
  resourceVersion: "948326"
  uid: d9fbfacd-1502-4331-8d0f-ee120c451798
volumeGroups:
- allocationPolicy: 0
  free: "0"
  lvCount: 6
  maxLv: 0
  maxPv: 0
  metadataCount: 1
  metadataFree: 505Ki
  metadataSize: 1020Ki
  metadataUsedCount: 1
  missingPvCount: 0
  name: vg-talos-pool
  permissions: 0
  pvCount: 1
  size: 232228Mi
  snapCount: 0
  thinPools:
  - free: "238753218898"
    name: vg-talos-pool_thinpool
    size: 231772Mi
  uuid: TQ6UoJ-6Twr-q9cG-uKmj-7i72-HLXO-BENs6Z
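Converting the figures above, the driver clearly knows about the free thin pool space; it just is not surfaced as CSIStorageCapacity. A small sketch using the `thinPools[0].free` byte values copied from the two LVMNode objects:

```python
# thinPools[0].free as reported by the LVMNode CRs, in bytes
free_bytes = {
    "talos1": 240964796883,
    "talos2": 238753218898,
}
for node, b in free_bytes.items():
    print(f"{node}: thin pool free = {b / 2**30:.1f} GiB")
# Both nodes report roughly 222-224 GiB free in the thin pool,
# yet the CSIStorageCapacity object for this StorageClass says capacity: "0".
```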
