VMDK file not deleted when deleting K8s PVC - because vmdk still attached to worker node #351
More info: deleting the Pod will automatically detach the VMDK from the worker nodes where those Pods were running. I don't know whether this is a K8s issue or not, but this particular scenario should have some kind of protection.
@guillierf There is no issue here; this is the expected sequence. When you delete the PVC, Kubernetes will try to delete the associated PV, but since the disks are still attached to the Node VM, they cannot be deleted yet. You can ignore the errors you are seeing in vCenter; this is expected, as Kubernetes keeps retrying failed operations. Once you delete the Pod, the VMDK will be detached from the Node VM and eventually the PVs will be deleted. Let me know if we can close this issue.
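For reference, a minimal sketch of that order using the claim and manifest names from the guestbook example quoted further down in this issue (output elided; the inline notes are assumptions about what each step achieves):
$ kubectl delete -f guestbook-all-in-one.yaml    # removes the Pods, so the VMDKs get detached from the Node VMs
$ kubectl delete pvc redis-master-claim redis-slave-claim    # now the PVs and their backing VMDKs can be cleaned up
$ kubectl get pv    # the dynamically provisioned PVs should disappear once the detach completes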
I replayed the test: I can see the worker nodes get reconfigured (VMDK disks detached from the worker node VMs), but the VMDK files still remain on the datastore (for 1 hour now).
@guillierf You just need to delete the PVC, not the PV resource. Kubernetes will delete the PV associated with the PVC along with the VMDK. Can you try that sequence?
@divyenpatel But the MAIN point of this thread/issue is that there is not much of a safeguard implemented here. I believe stronger code should be implemented to make sure a user cannot perform undesired actions.
@guillierf Yes, these limitations will be addressed by Kubernetes. You can refer to the following proposals.
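A minimal sketch of how such a safeguard surfaces in later Kubernetes releases, presumably corresponding to those proposals (assuming a cluster with storage object in use protection enabled; a bound PVC then carries the standard kubernetes.io/pvc-protection finalizer, which blocks its removal while a Pod still uses it):
$ kubectl get pvc redis-master-claim -o jsonpath='{.metadata.finalizers}'
# expected to include kubernetes.io/pvc-protection, which keeps the PVC
# object around until no Pod references it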
thanks @divyenpatel !!!
"Can you try with yes, this sequence works 100% fine. VMDK is correctly deleted on Datastore. |
Thank you for confirming. I am closing this issue.
I used the 'Guestbook Application with Dynamic Provisioning' example
(https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/guestbook.html)
steps:
1/ kubectl create -f redis-sc.yaml
2/ kubectl create -f redis-master-claim.yaml -f redis-slave-claim.yaml
3/ kubectl create -f guestbook-all-in-one.yaml
PV, PVC and VMDK were correctly created at step 2/.
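As a sanity check at that point (a sketch; kubevols is the default datastore folder used by the vSphere Cloud Provider for dynamically provisioned disks, and the exact VMDK names are generated):
$ kubectl get pvc,pv    # both claims should show STATUS Bound
# on the datastore, the provisioned disks show up as, e.g.,
# kubevols/kubernetes-dynamic-pvc-<uid>.vmdk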
now, I did:
$ kubectl delete pvc redis-master-claim redis-slave-claim
persistentvolumeclaim "redis-master-claim" deleted
persistentvolumeclaim "redis-slave-claim" deleted
kubectl get pv gives:
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-050e8a4c-ba7d-11e7-9288-005056841f8e 2Gi RWO Delete Failed default/redis-master-claim thin-disk 9m
pvc-051077ed-ba7d-11e7-9288-005056841f8e 2Gi RWO Delete Failed default/redis-slave-claim thin-disk 9m
In vCenter, I see many attempts to delete the VMDK files, but they fail because the VMDK files are still attached to the worker nodes.
So it looks like the step of detaching the VMDK disks from the worker nodes is missing here.
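To see the same failures from the Kubernetes side, one of the stuck PVs can be inspected like this (a sketch; the exact event wording varies by provisioner version):
$ kubectl describe pv pvc-050e8a4c-ba7d-11e7-9288-005056841f8e
# the Events section should show the repeated delete attempts failing
# because the virtual disk is still attached to a Node VM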