Hung on deleting VolumeSnapshot and VolumeSnapshotContent #676

Closed
crazytaxii opened this issue Mar 28, 2022 · 1 comment

What happened:
Deleting a VolumeSnapshot and its VolumeSnapshotContent hangs. The VolumeSnapshotContent shares the same snapshotHandle with another VolumeSnapshotContent (call it VolumeSnapshotContent0) in the cluster.
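
For context, contents that share a handle can be spotted with a query like the following (field path per the snapshot.storage.k8s.io/v1 API; the two rows with the same HANDLE are the duplicates described above):

kubectl get volumesnapshotcontent -o custom-columns=NAME:.metadata.name,HANDLE:.status.snapshotHandle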

What you expected to happen:
VolumeSnapshot and VolumeSnapshotContent should be deleted successfully.

How to reproduce it:

  1. snapshot-controller version: v4.0.0
  2. Use an existing PVC as the source of VolumeSnapshot1:
kubectl create -f - <<EOF
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: virtual-machine-image-centos-snapshot
  namespace: demo
spec:
  volumeSnapshotClassName: csi-rbdplugin-snapclass
  source:
    persistentVolumeClaimName: virtual-machine-image-centos
EOF

This VolumeSnapshot will dynamically provision a VolumeSnapshotContent (this is VolumeSnapshotContent0).
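
The name of that dynamically provisioned content can be read from the snapshot's status (a minimal check, assuming the names from the manifest above):

kubectl -n demo get volumesnapshot virtual-machine-image-centos-snapshot -o jsonpath='{.status.boundVolumeSnapshotContentName}'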

kubectl describe VolumeSnapshotContent snapcontent-ee6e0e01-91a0-4257-a498-b7e6f8eec528
Status:
  Creation Time:    1647938038698797346
  Ready To Use:     true
  Restore Size:     10737418240
  Snapshot Handle:  0001-0024-021cc886-6265-11ec-b698-001c42a63dfd-0000000000000002-d186c38d-a9ba-11ec-b843-9e7380beabb6
  3. Use the snapshotHandle above to create another VolumeSnapshotContent (this is VolumeSnapshotContent2), which we want to bind to VolumeSnapshot2 in another namespace:
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotContent
metadata:
  name: default-boot-volume-snapshot
spec:
  deletionPolicy: Delete
  driver: rbd.csi.ceph.com
  source:
    snapshotHandle: 0001-0024-021cc886-6265-11ec-b698-001c42a63dfd-0000000000000002-d186c38d-a9ba-11ec-b843-9e7380beabb6
  volumeSnapshotClassName: csi-rbdplugin-snapclass
  volumeSnapshotRef:
    name: boot-volume-snapshot
    namespace: default
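
The pre-provisioned content can then be checked for binding and readiness (a sketch using the names from the manifest above; readyToUse should report true once the snapshot sidecar has reconciled it):

kubectl get volumesnapshotcontent default-boot-volume-snapshot -o custom-columns=BOUND:.spec.volumeSnapshotRef.name,READY:.status.readyToUse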
  4. Create a VolumeSnapshot (this is VolumeSnapshot2) in the other namespace and set its source to VolumeSnapshotContent2:
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: boot-volume-snapshot
spec:
  volumeSnapshotClassName: csi-rbdplugin-snapclass
  source:
    volumeSnapshotContentName: default-boot-volume-snapshot
  5. Delete VolumeSnapshot2 or VolumeSnapshotContent2; both deletions hang.
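
When the delete hangs, both objects typically sit with a deletionTimestamp set and snapshot finalizers still attached; one way to confirm that (names as above):

kubectl get volumesnapshot boot-volume-snapshot -o jsonpath='{.metadata.deletionTimestamp}{"\n"}{.metadata.finalizers}{"\n"}'
kubectl get volumesnapshotcontent default-boot-volume-snapshot -o jsonpath='{.metadata.deletionTimestamp}{"\n"}{.metadata.finalizers}{"\n"}'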

Anything else we need to know?:

Environment:

  • Driver version: Ceph RBD CSI driver
  • Ceph CSI version: v3.5.1
  • Kubernetes version (use kubectl version): v1.20.15
  • OS (e.g. from /etc/os-release): CentOS 7.9
  • Kernel (e.g. uname -a): 5.17.0-1.el7.elrepo.x86_64
crazytaxii (author) commented:

VolumeSnapshotContent2 must be created with the following annotations:

---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotContent
metadata:
  name: default-boot-volume-snapshot
  annotations:
    snapshot.storage.kubernetes.io/deletion-secret-namespace: default
    snapshot.storage.kubernetes.io/deletion-secret-name: csi-rbd-secret
spec:
  deletionPolicy: Delete
  driver: rbd.csi.ceph.com
  source:
    snapshotHandle: 0001-0024-021cc886-6265-11ec-b698-001c42a63dfd-0000000000000002-d186c38d-a9ba-11ec-b843-9e7380beabb6
  volumeSnapshotClassName: csi-rbdplugin-snapclass
  volumeSnapshotRef:
    name: boot-volume-snapshot
    namespace: default

Related issue: #251
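
If the content already exists, the same annotations can in principle be added in place rather than recreating the object (an untested sketch; the comment above only confirms creating the content with them):

kubectl annotate volumesnapshotcontent default-boot-volume-snapshot \
  snapshot.storage.kubernetes.io/deletion-secret-name=csi-rbd-secret \
  snapshot.storage.kubernetes.io/deletion-secret-namespace=default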
