This repository has been archived by the owner on Sep 7, 2022. It is now read-only.

Deleting PV on vSphereVolume does not remove files from datastore #340

Closed
snsumner opened this issue Oct 24, 2017 · 5 comments

snsumner commented Oct 24, 2017

/kind bug

What happened: When deleting a persistent volume with a reclaim policy of Retain, the associated files were not removed from the datastore.

What you expected to happen: I expected the associated .vmdk file of the PV to be removed.

How to reproduce it (as minimally and precisely as possible):

[Tue Oct 24 16:52:52 core@jumpbox:~/github/snsumner/elasticsearch-operator-helm ] $ kubectl describe pv pvc-5f955eda-b8f1-11e7-958b-005056a6bd8e
Name:        pvc-5f955eda-b8f1-11e7-958b-005056a6bd8e
Labels:        <none>
Annotations:    kubernetes.io/createdby=vsphere-volume-dynamic-provisioner
        pv.kubernetes.io/bound-by-controller=yes
        pv.kubernetes.io/provisioned-by=kubernetes.io/vsphere-volume
StorageClass:    fast
Status:        Released
Claim:        elasticsearch/es-data-es-data-example-es-cluster-fast-0
Reclaim Policy:    Retain
Access Modes:    RWO
Capacity:    10Gi
Message:
Source:
    Type:    vSphereVolume (a Persistent Disk resource in vSphere)
    VolumePath:        StoragePolicyName:    %v

    FSType:                [datastore12] kubevols/kubernetes-dynamic-pvc-5f955eda-b8f1-11e7-958b-005056a6bd8e.vmdk
%!(EXTRA string=ext2, string=)Events:    <none>
[Tue Oct 24 16:58:05 core@jumpbox:~/github/snsumner/elasticsearch-operator-helm ] $ kubectl delete pv pvc-5f955eda-b8f1-11e7-958b-005056a6bd8e
persistentvolume "pvc-5f955eda-b8f1-11e7-958b-005056a6bd8e" deleted
[root@esx2:/vmfs/volumes/599f1776-ae9f5780-ab5f-6805ca1a7c35/kubevols] ls -ls kubernetes-dynamic-pvc-5f955eda-b8f1-11e7-958b-005056a6bd8e*
10485760 -rw-------    1 root     root     10737418240 Oct 24 19:35 kubernetes-dynamic-pvc-5f955eda-b8f1-11e7-958b-005056a6bd8e-flat.vmdk
     0 -rw-------    1 root     root           521 Oct 24 19:28 kubernetes-dynamic-pvc-5f955eda-b8f1-11e7-958b-005056a6bd8e.vmdk

A co-worker of mine pointed out this snippet:
https://kubernetes.io/docs/concepts/storage/persistent-volumes/#retaining

The PV was originally created with a reclaim policy of Retain. I ran this kubectl command:

kubectl patch pv pvc-04d10f5b-b8fa-11e7-958b-005056a6bd8e -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'

But I noticed that the PV was still set to Retain. Perhaps this is just my lack of knowledge around k8s, but shouldn't that have changed it to Delete? And once it is set to Delete, shouldn't the provisioner delete the files automatically when the PV is deleted?
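For reference, one way to confirm whether the patch actually applied is to read the field back afterwards (a sketch; substitute your own PV name, and this assumes access to the same cluster):

```shell
# Patch the reclaim policy from Retain to Delete
kubectl patch pv pvc-04d10f5b-b8fa-11e7-958b-005056a6bd8e \
  -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'

# Read the field back to verify the change before deleting anything
kubectl get pv pvc-04d10f5b-b8fa-11e7-958b-005056a6bd8e \
  -o jsonpath='{.spec.persistentVolumeReclaimPolicy}'
```

If the second command still prints `Retain`, the patch did not take effect.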

Environment:

  • Kubernetes version (use kubectl version):
[Tue Oct 24 17:05:54 core@jumpbox:~/github/snsumner/elasticsearch-operator-helm ] $ kubectl version
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.5", GitCommit:"17d7182a7ccbb167074be7a87f0a68bd00d58d97", GitTreeState:"clean", BuildDate:"2017-08-31T09:14:02Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.5+coreos.1", GitCommit:"070d238cd2ec359928548e486a9171b498573181", GitTreeState:"clean", BuildDate:"2017-09-25T20:07:57Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
  • Cloud provider or hardware configuration: CoreOS 1.7.5.tectonic.1 / vSphere 6.5 u1
  • OS (e.g. from /etc/os-release):
core@worker-demo-01 /etc $ cat os-release
NAME="Container Linux by CoreOS"
ID=coreos
VERSION=1520.6.0
VERSION_ID=1520.6.0
BUILD_ID=2017-10-12-0349
PRETTY_NAME="Container Linux by CoreOS 1520.6.0 (Ladybug)"
ANSI_COLOR="38;5;75"
HOME_URL="https://coreos.com/"
BUG_REPORT_URL="https://issues.coreos.com"
COREOS_BOARD="amd64-usr"
  • Kernel (e.g. uname -a):
core@worker-demo-01 /etc $ uname -a
Linux worker-demo-01 4.13.5-coreos-r2 #1 SMP Thu Oct 12 03:21:09 UTC 2017 x86_64 AMD Ryzen 7 1700 Eight-Core Processor AuthenticAMD GNU/Linux
@divyenpatel

@snsumner Can you share the YAML files you used for provisioning the PV?

You can change reclaim policy of PV from Retain to Delete before deleting PV.

Steps are available at https://kubernetes.io/docs/tasks/administer-cluster/change-pv-reclaim-policy/#changing-the-reclaim-policy-of-a-persistentvolume

Can you try this out?

CC: @BaluDontu @tusharnt

@divyenpatel

@snsumner Just noticed you have already tried changing the policy from 'Retain' to 'Delete'.
Please share your YAML files and we will try to reproduce this locally.

@snsumner
Author

The PVs are automatically generated by the Elasticsearch Operator.

Here is the YAML of one of the PVs that gets created:

[Fri Oct 27 09:53:59 core@jumpbox:~/.kube ] $ kubectl get pv pvc-cb8a189b-b910-11e7-958b-005056a6bd8e --output=yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    kubernetes.io/createdby: vsphere-volume-dynamic-provisioner
    pv.kubernetes.io/bound-by-controller: "yes"
    pv.kubernetes.io/provisioned-by: kubernetes.io/vsphere-volume
  creationTimestamp: 2017-10-24T23:12:24Z
  name: pvc-cb8a189b-b910-11e7-958b-005056a6bd8e
  resourceVersion: "2529082"
  selfLink: /api/v1/persistentvolumes/pvc-cb8a189b-b910-11e7-958b-005056a6bd8e
  uid: cbb08af6-b910-11e7-958b-005056a6bd8e
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 10Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: es-data-es-master-example-es-cluster-fast-0
    namespace: elasticsearch
    resourceVersion: "2529047"
    uid: cb8a189b-b910-11e7-958b-005056a6bd8e
  persistentVolumeReclaimPolicy: Delete
  storageClassName: fast
  vsphereVolume:
    fsType: ext2
    volumePath: '[datastore12] kubevols/kubernetes-dynamic-pvc-cb8a189b-b910-11e7-958b-005056a6bd8e.vmdk'
status:
  phase: Bound

For more details about the Elasticsearch Operator, see: https://github.com/upmc-enterprises/elasticsearch-operator

I created a Helm chart to deploy the Elasticsearch Operator on Tectonic using vSphereVolumes, which is located here: https://github.com/snsumner/elasticsearch-operator-helm

This is the storage class I used for vSphereVolumes:

kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: fast
provisioner: kubernetes.io/vsphere-volume
parameters:
    diskformat: zeroedthick
    fstype:     ext2

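For completeness, a PVC that consumes this storage class would look roughly like the following (a hypothetical example; the name and size are illustrative, not taken from the cluster above). Any PV dynamically provisioned for it gets a .vmdk under kubevols on the datastore:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: example-claim
spec:
  storageClassName: fast
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```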
@shaominchen

Thanks @snsumner for sharing the YAML files. We will verify and update here.

@shaominchen

shaominchen commented Oct 27, 2017

@snsumner This is not an issue. You should delete the PVC instead of the PV. Kubernetes should prevent users from deleting a PV that is bound to a PVC - see kubernetes/enhancements#499.
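To sketch the suggested flow (using the PV and claim names from the examples above; adjust them for your cluster): set the PV's reclaim policy to Delete, then delete the PVC rather than the PV, and the vSphere provisioner should remove the backing .vmdk from the datastore.

```shell
# 1. Set the reclaim policy on the bound PV to Delete
kubectl patch pv pvc-cb8a189b-b910-11e7-958b-005056a6bd8e \
  -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'

# 2. Delete the PVC, not the PV; the controller releases the PV
#    and the provisioner deletes the .vmdk on the datastore
kubectl delete pvc es-data-es-master-example-es-cluster-fast-0 \
  -n elasticsearch
```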

I'm closing this issue.

ashahi1 pushed a commit that referenced this issue Apr 11, 2018
Fixes #340

Adds functionality to see logs with various level of verbosity.

Currently there are two verbosity levels: 0 and 1