VMDK files on datastore not removed when PV is deleted #345
Comments
@guillierf when you delete a PVC whose PV has the reclaim policy set to Delete, the associated VMDK should get deleted. If you have used the PVCs in Pods and have only deleted the Pods, the disks are not yet detached from the Node VMs. You may see errors in the vCenter tasks, but eventually, when the disks get detached from the Nodes, they should be deleted successfully, as Kubernetes keeps retrying failed operations.
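For reference, a quick way to confirm the reclaim policy and the bound claim of a volume before cleaning up (a minimal sketch, assuming kubectl access to the cluster; the PV name is taken from the listing below):
$ kubectl get pv pvc-cb026b44-b9ca-11e7-95bb-00505684477a -o jsonpath='{.spec.persistentVolumeReclaimPolicy}'
$ kubectl describe pv pvc-cb026b44-b9ca-11e7-95bb-00505684477a   # shows the bound claim and the vSphere volume path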
@guillierf I have reproduced the issue locally with the following sequence.
You may be following the same sequence. But if you delete the PVC first, the PV gets deleted automatically. When PVs are provisioned using PVCs, they should be deleted using the PVCs. If a PV is created statically, then deleting the PV deletes the associated VMDK.
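In other words, for dynamically provisioned volumes only the claim needs to be deleted. A minimal sketch with the names used in this issue (workload deletion and disk detach are assumed to have happened already):
$ kubectl delete pvc pvc100gb-101
# the bound PV pvc-cb026b44-b9ca-11e7-95bb-00505684477a is reclaimed automatically because
# its reclaim policy is Delete, and the vSphere cloud provider then removes the backing VMDK
$ kubectl get pv pvc-cb026b44-b9ca-11e7-95bb-00505684477a
# expected to report NotFound once the reclaim has completed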
@divyenpatel deleting the PV first does NOT delete the VMDK files on the datastore.
@divyenpatel if a PVC is bound to a PV and someone tries to delete the PV, K8s should throw an error message stating that the PV cannot be deleted; I guess K8s has to fix this part. It makes sense that deleting the PVC deletes the PV, which deletes the VMDK (because here the PVC is mapped 1:1 with the PV and claims the whole capacity of the PV).
As commented at #351 (comment), the sequence described there should be followed. Closing this issue.
Using the example "Deploying Shared MongoDB Cluster" on my K8s cluster (deployed by KuBo release 0.7.0), I have created the Storage Class and the PVCs.
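The exact manifests from the MongoDB example are not reproduced here; the following is only a sketch of what the storage class and one of the claims would look like with the in-tree vSphere provisioner (the diskformat parameter is an assumption):
$ cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mongoshardsc
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin    # assumption: thin-provisioned disks
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc100gb-101
spec:
  storageClassName: mongoshardsc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
EOF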
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc100gb-101 Bound pvc-cb026b44-b9ca-11e7-95bb-00505684477a 100Gi RWO mongoshardsc 43s
pvc100gb-102 Bound pvc-cb042f4c-b9ca-11e7-95bb-00505684477a 100Gi RWO mongoshardsc 43s
pvc100gb-201 Bound pvc-cd025d98-b9ca-11e7-9288-005056841f8e 100Gi RWO mongoshardsc 39s
pvc100gb-202 Bound pvc-cd03d41f-b9ca-11e7-9288-005056841f8e 100Gi RWO mongoshardsc 39s
pvc100gb-301 Bound pvc-ce7c4a5c-b9ca-11e7-95bb-00505684477a 100Gi RWO mongoshardsc 37s
pvc100gb-302 Bound pvc-ce7d7686-b9ca-11e7-95bb-00505684477a 100Gi RWO mongoshardsc 37s
pvc100gb-401 Bound pvc-d02569c1-b9ca-11e7-9288-005056841f8e 100Gi RWO mongoshardsc 34s
pvc100gb-402 Bound pvc-d02710ff-b9ca-11e7-9288-005056841f8e 100Gi RWO mongoshardsc 34s
pvc5gb-101 Bound pvc-cb05768f-b9ca-11e7-95bb-00505684477a 5Gi RWO mongoshardsc 43s
pvc5gb-201 Bound pvc-cd066d7d-b9ca-11e7-9288-005056841f8e 5Gi RWO mongoshardsc 39s
pvc5gb-301 Bound pvc-ce7f67f9-b9ca-11e7-95bb-00505684477a 5Gi RWO mongoshardsc 37s
pvc5gb-401 Bound pvc-d0288489-b9ca-11e7-9288-005056841f8e 5Gi RWO mongoshardsc 34s
pvcmongoc20gb-101 Bound pvc-cb009343-b9ca-11e7-95bb-00505684477a 20Gi RWO mongoshardsc 43s
pvcmongoc20gb-201 Bound pvc-cd0008e0-b9ca-11e7-9288-005056841f8e 20Gi RWO mongoshardsc 39s
pvcmongoc20gb-301 Bound pvc-ce7ab3d4-b9ca-11e7-95bb-00505684477a 20Gi RWO mongoshardsc 37s
pvcmongoc20gb-401 Bound pvc-d022c67d-b9ca-11e7-9288-005056841f8e 20Gi RWO mongoshardsc 34s
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-cb026b44-b9ca-11e7-95bb-00505684477a 100Gi RWO Delete Bound default/pvc100gb-101 mongoshardsc 1m
pvc-cb042f4c-b9ca-11e7-95bb-00505684477a 100Gi RWO Delete Bound default/pvc100gb-102 mongoshardsc 1m
pvc-cb05768f-b9ca-11e7-95bb-00505684477a 5Gi RWO Delete Bound default/pvc5gb-101 mongoshardsc 1m
pvc-cd0008e0-b9ca-11e7-9288-005056841f8e 20Gi RWO Delete Bound default/pvcmongoc20gb-201 mongoshardsc 1m
pvc-cd025d98-b9ca-11e7-9288-005056841f8e 100Gi RWO Delete Bound default/pvc100gb-201 mongoshardsc 1m
pvc-cd03d41f-b9ca-11e7-9288-005056841f8e 100Gi RWO Delete Bound default/pvc100gb-202 mongoshardsc 1m
pvc-ce7ab3d4-b9ca-11e7-95bb-00505684477a 20Gi RWO Delete Bound default/pvcmongoc20gb-301 mongoshardsc 1m
pvc-ce7d7686-b9ca-11e7-95bb-00505684477a 100Gi RWO Delete Bound default/pvc100gb-302 mongoshardsc 1m
=> Everything is OK
I can see all the VMDK files created on the datastore (NFS backend here).
But when I delete all the PVs and PVCs, the VMDK files are still present on the datastore.
[PV has reclaim policy=Delete here]
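One way to see what is actually left behind is govc, the vSphere CLI from the govmomi project. This is only a sketch: the kubevols folder is assumed to be the default location the vSphere cloud provider uses for dynamically provisioned disks, and <datastore-name> is a placeholder for the real datastore name.
# assumes GOVC_URL / GOVC_USERNAME / GOVC_PASSWORD (and GOVC_INSECURE if needed) point at the vCenter
$ govc datastore.ls -ds=<datastore-name> kubevols
# any leftover .vmdk files for the deleted PVs listed here mean the reclaim never reached the datastore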