fix(provisioning): Add ownerRef to provisioned volumes #436

Open
Gufran wants to merge 1 commit into openebs:develop from Gufran:enhance/volume-with-ownerRef

Conversation

@Gufran Gufran commented Jan 23, 2026

Pull Request template

Please, go through these steps before you submit a PR.

Why is this PR required? What issue does it fix?:

Currently, the volumes hang around when the node is deleted, which leads to operational toil and alerts. Since the node and the CSINode are both gone, it is safe to assume that the actual volume is also lost; in that case we can rely on the built-in mechanism to clear the stale references. This is how local-static-provisioner does it as well.

What this PR does?:

Add ownerRef to volumes
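A minimal sketch of the mechanism, assuming hypothetical names: the struct definitions below stand in for the real `k8s.io/apimachinery/pkg/apis/meta/v1` types, and `ownerRefForNode` is illustrative rather than the actual lvm-localpv helper. The provisioner sets an owner reference on the volume CR pointing at the Node object, so the Kubernetes garbage collector deletes the CR when the Node is deleted.

```go
package main

import "fmt"

// Minimal stand-ins for the Kubernetes API types involved; in the real
// plugin these come from k8s.io/apimachinery/pkg/apis/meta/v1.
type OwnerReference struct {
	APIVersion         string
	Kind               string
	Name               string
	UID                string
	BlockOwnerDeletion *bool
}

type ObjectMeta struct {
	Name            string
	OwnerReferences []OwnerReference
}

// ownerRefForNode builds an owner reference pointing at the Node object.
// Once this is set on a volume CR, the Kubernetes garbage collector
// removes the CR automatically when that Node object is deleted.
func ownerRefForNode(nodeName, nodeUID string) OwnerReference {
	block := true
	return OwnerReference{
		APIVersion:         "v1",
		Kind:               "Node",
		Name:               nodeName,
		UID:                nodeUID,
		BlockOwnerDeletion: &block,
	}
}

func main() {
	vol := ObjectMeta{Name: "pvc-1234"}
	vol.OwnerReferences = append(vol.OwnerReferences, ownerRefForNode("worker-1", "abc-123"))
	fmt.Println(vol.OwnerReferences[0].Kind, vol.OwnerReferences[0].Name) // Node worker-1
}
```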

Does this PR require any upgrade changes?:

no

If the changes in this PR are manually verified, list down the scenarios covered::

none

Any additional information for your reviewer? :

none

Checklist:

  • Fixes #
  • PR Title follows the convention of <type>(<scope>): <subject>
  • Has the change log section been updated?
  • Commit has unit tests
  • Commit has integration tests
  • (Optional) Are upgrade changes included in this PR? If not, mention the issue/PR to track:
  • (Optional) If documentation changes are required, which issue on https://github.com/openebs/openebs-docs is used to track them:

Currently, the volumes hang around when the node is deleted, which leads
to operational toil and alerts. Since the node and the CSINode are both
gone, it is safe to assume that the actual volume is also lost; in that
case we can rely on the built-in mechanism to clear the stale
references. This is how local-static-provisioner does it as well.

Signed-off-by: Mohammad Gufran <dogabhai@gmail.com>
@Gufran Gufran requested a review from a team as a code owner January 23, 2026 12:00

codecov bot commented Jan 23, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 98.46%. Comparing base (8feeaa0) to head (b7e5ca4).
⚠️ Report is 159 commits behind head on develop.

Additional details and impacted files
@@             Coverage Diff             @@
##           develop     #436      +/-   ##
===========================================
- Coverage    98.66%   98.46%   -0.20%     
===========================================
  Files            2        2              
  Lines          673      979     +306     
===========================================
+ Hits           664      964     +300     
- Misses           5        8       +3     
- Partials         4        7       +3     
Flag       Coverage Δ
bddtests   98.46% <ø> (-0.20%) ⬇️


@abhilashshetty04
Member

Hi @Gufran, we added a fix some time back: #413

Basically, we clean up the lvmvolume CR if we can't find the k8s node in the cluster.

@Gufran
Author

Gufran commented Feb 5, 2026

Thanks for pointing it out. I'll update our version and see if that helps. Any specific reason not to use owner references?

@tiagolobocastro
Member

Thanks for pointing it out. I'll update our version and see if that helps. Any specific reason not to use owner references?

One downside could be that the user would have to recreate the CRs if they also move the disk to another node?

What about the lvmnode CR btw?

I'm probably ok with getting this in, but not enabled by default

@Gufran
Author

Gufran commented Feb 10, 2026

One downside could be that the user would have to recreate the CRs if they also move the disk to another node?

This is much more likely to happen in a cloud infra compared to what I deal with on a daily basis. It didn't occur to me to think about that!

What about the lvmnode CR btw?
I'm probably ok with getting this in, but not enabled by default

I can update lvmnode as well and put this change behind a config option that is disabled by default. I'll update the PR soon.
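The gating described above could look like the sketch below. The flag name is hypothetical (not an actual lvm-localpv option); the point is only that the ownerRef behaviour stays opt-in, matching the reviewer's "not enabled by default".

```go
package main

import (
	"flag"
	"fmt"
)

// Hypothetical option name; the real lvm-localpv flag may differ.
var setOwnerRefs = flag.Bool("set-volume-owner-refs", false,
	"add a Node ownerReference to lvmvolume/lvmnode CRs so they are "+
		"garbage-collected when the node object is deleted")

func main() {
	flag.Parse()
	if *setOwnerRefs {
		fmt.Println("ownerReferences: enabled")
	} else {
		fmt.Println("ownerReferences: disabled (default)")
	}
}
```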

@abhilashshetty04
Member

Thanks for pointing it out. I'll update our version and see if that helps. Any specific reason not to use owner references?

The fix I pointed out will work if the user deletes a PVC whose lvmvolumes are on nodes that are no longer in the cluster. I think cleaning up backend volumes while the PVC still exists does not seem right.

