This repository has been archived by the owner on Sep 7, 2022. It is now read-only.

VMDK files on datastore not removed when PV is deleted #345

Closed
guillierf opened this issue Oct 25, 2017 · 5 comments

@guillierf

I am using the example "Deploying Shared MongoDB Cluster" on my K8s cluster (deployed by Kubo release 0.7.0).

I have created the StorageClass and the PVCs.
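
For reference, a minimal sketch of what the StorageClass and one of the 100Gi claims could look like here (the provisioner parameters below, diskformat and datastore, are placeholders rather than the exact values from my environment):

$ cat <<EOF | kubectl create -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: mongoshardsc
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin                 # placeholder disk format
  datastore: my-nfs-datastore      # placeholder datastore name
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc100gb-101
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: mongoshardsc
  resources:
    requests:
      storage: 100Gi
EOF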

$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc100gb-101 Bound pvc-cb026b44-b9ca-11e7-95bb-00505684477a 100Gi RWO mongoshardsc 43s
pvc100gb-102 Bound pvc-cb042f4c-b9ca-11e7-95bb-00505684477a 100Gi RWO mongoshardsc 43s
pvc100gb-201 Bound pvc-cd025d98-b9ca-11e7-9288-005056841f8e 100Gi RWO mongoshardsc 39s
pvc100gb-202 Bound pvc-cd03d41f-b9ca-11e7-9288-005056841f8e 100Gi RWO mongoshardsc 39s
pvc100gb-301 Bound pvc-ce7c4a5c-b9ca-11e7-95bb-00505684477a 100Gi RWO mongoshardsc 37s
pvc100gb-302 Bound pvc-ce7d7686-b9ca-11e7-95bb-00505684477a 100Gi RWO mongoshardsc 37s
pvc100gb-401 Bound pvc-d02569c1-b9ca-11e7-9288-005056841f8e 100Gi RWO mongoshardsc 34s
pvc100gb-402 Bound pvc-d02710ff-b9ca-11e7-9288-005056841f8e 100Gi RWO mongoshardsc 34s
pvc5gb-101 Bound pvc-cb05768f-b9ca-11e7-95bb-00505684477a 5Gi RWO mongoshardsc 43s
pvc5gb-201 Bound pvc-cd066d7d-b9ca-11e7-9288-005056841f8e 5Gi RWO mongoshardsc 39s
pvc5gb-301 Bound pvc-ce7f67f9-b9ca-11e7-95bb-00505684477a 5Gi RWO mongoshardsc 37s
pvc5gb-401 Bound pvc-d0288489-b9ca-11e7-9288-005056841f8e 5Gi RWO mongoshardsc 34s
pvcmongoc20gb-101 Bound pvc-cb009343-b9ca-11e7-95bb-00505684477a 20Gi RWO mongoshardsc 43s
pvcmongoc20gb-201 Bound pvc-cd0008e0-b9ca-11e7-9288-005056841f8e 20Gi RWO mongoshardsc 39s
pvcmongoc20gb-301 Bound pvc-ce7ab3d4-b9ca-11e7-95bb-00505684477a 20Gi RWO mongoshardsc 37s
pvcmongoc20gb-401 Bound pvc-d022c67d-b9ca-11e7-9288-005056841f8e 20Gi RWO mongoshardsc 34s

$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-cb026b44-b9ca-11e7-95bb-00505684477a 100Gi RWO Delete Bound default/pvc100gb-101 mongoshardsc 1m
pvc-cb042f4c-b9ca-11e7-95bb-00505684477a 100Gi RWO Delete Bound default/pvc100gb-102 mongoshardsc 1m
pvc-cb05768f-b9ca-11e7-95bb-00505684477a 5Gi RWO Delete Bound default/pvc5gb-101 mongoshardsc 1m
pvc-cd0008e0-b9ca-11e7-9288-005056841f8e 20Gi RWO Delete Bound default/pvcmongoc20gb-201 mongoshardsc 1m
pvc-cd025d98-b9ca-11e7-9288-005056841f8e 100Gi RWO Delete Bound default/pvc100gb-201 mongoshardsc 1m
pvc-cd03d41f-b9ca-11e7-9288-005056841f8e 100Gi RWO Delete Bound default/pvc100gb-202 mongoshardsc 1m
pvc-ce7ab3d4-b9ca-11e7-95bb-00505684477a 20Gi RWO Delete Bound default/pvcmongoc20gb-301 mongoshardsc 1m
pvc-ce7d7686-b9ca-11e7-95bb-00505684477a 100Gi RWO Delete Bound default/pvc100gb-302 mongoshardsc 1m

=> Everything is OK so far: I can see all the VMDK files created on the datastore (an NFS-backed datastore here).

But when I deleted all the PVs and PVCs, the VMDK files were still present on the datastore.
[The PVs have reclaim policy=Delete here.]
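
One way to confirm the leftover files is to look at the datastore directly (a sketch, assuming SSH access to an ESXi host and the default kubevols folder used by the vSphere cloud provider; the datastore path is a placeholder):

$ ls /vmfs/volumes/my-nfs-datastore/kubevols/
# dynamically provisioned disks typically show up here as kubernetes-dynamic-pvc-<uid>.vmdk files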

@divyenpatel

@guillierf When you delete a PVC whose PV has the reclaim policy set to Delete, the associated PV and disk should be deleted automatically. Do you see any errors in the tasks on VC?

If you have used the PVCs in Pods and have just deleted the Pods, the disks may not yet be detached from the node VMs. You may see errors in the VC tasks, but eventually, once the disks get detached from the nodes, they should be deleted successfully, as Kubernetes keeps retrying failed operations.
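
On the Kubernetes side, such failed delete attempts also surface as events; a quick way to check (the PV name is one from the listing above):

$ kubectl describe pv pvc-cb026b44-b9ca-11e7-95bb-00505684477a    # check the Events section for delete/detach errors
$ kubectl get events --all-namespaces | grep -i volume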

@divyenpatel

divyenpatel commented Oct 26, 2017

@guillierf I have reproduced the issue locally with the following sequence.

  1. Create the StorageClass.
  2. Create a PVC with the default reclaim policy (Delete).
  3. Verify the PV is provisioned and Bound.
  4. Delete the PV. Upon deleting the PV, the PVC status becomes Lost and the PVC is removed automatically by Kubernetes, but the associated VMDK remains (see the sketch below).

You may be following the same sequence.
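
Roughly, the same sequence in kubectl terms (the manifest file names here are illustrative):

$ kubectl create -f storageclass.yaml      # 1. StorageClass with the vSphere provisioner
$ kubectl create -f pvc.yaml               # 2. PVC; the provisioned PV has reclaim policy Delete
$ kubectl get pv                           # 3. PV is provisioned and Bound
$ kubectl delete pv <pv-name>              # 4. PVC goes Lost and is removed, but the VMDK stays on the datastore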

But if you delete the PVC first, then the PV gets deleted automatically. When PVs are provisioned dynamically through PVCs, they should also be deleted through the PVCs.

If PVs are created statically, then deleting the PV deletes the associated VMDK.
I will confirm this workflow with the Kubernetes storage community and post an update here.
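
For comparison, the sequence that does clean up the backing disk (a sketch; the claim name is taken from the listing above):

$ kubectl delete pvc pvc100gb-101          # delete the claim, not the PV
$ kubectl get pv                           # the bound PV is deleted by the provisioner shortly after,
                                           # and the corresponding VMDK is removed from the datastore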

@guillierf

@divyenpatel
Yes, this is what I saw in my lab this morning as well:

deleting the PV first does NOT delete the VMDK files on the datastore;
deleting the PVC first WILL delete the VMDK files on the datastore.

@guillierf

@divyenpatel
I think I now understand what you mean in the last part of your message.

If a PVC is bound to a PV and someone tries to delete the PV, K8s should throw an error stating that the PV cannot be deleted. I guess K8s has to fix this part.

It makes sense that deleting the PVC deletes the PV, which in turn deletes the VMDK (because here each PVC is mapped 1:1 to a PV, and the PVC claims the whole capacity of the PV).
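
A quick way to see that 1:1 binding from the PV side is the PV's claimRef (the PV name is one from the listing above):

$ kubectl get pv pvc-cb026b44-b9ca-11e7-95bb-00505684477a -o jsonpath='{.spec.claimRef.namespace}/{.spec.claimRef.name}{"\n"}'
default/pvc100gb-101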

@divyenpatel

As commented at #351 (comment), the correct sequence is to delete the PVC, not to delete the PV resource directly.

Closing this issue.
