[cephfs] Deletion pvc issues with ceph 14 #1300
Comments
Tried Mimic and Nautilus, rebuilt the provisioner Docker image from the latest code, tried different images, and checked the fattr @zhoubofsy mentioned in #860; nothing helped. The issue is as follows: the PVC provisions fine and the client secret is created in the cephfs namespace as well, but deletion still fails as described below.
I encountered the same problem.
@AlawnWong this project is outdated and no longer supported.
@xriser
@Ranler, as I said, I have tried Mimic and Nautilus.
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
I have found the cause: the CEPH_VOLUME_GROUP environment variable is not set when the PV is deleted.
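For context, here is a minimal sketch of how a volume-group prefix shapes the share path; the names, default value, and path layout are illustrative assumptions, not the actual identifiers in cephfs_provisioner:

```python
import os

# Illustrative sketch only: the real cephfs_provisioner derives the share path
# from a volume-group prefix; the names and defaults here are assumptions.
VOLUME_GROUP = os.environ.get("CEPH_VOLUME_GROUP", "kubernetes")

def share_path(share_name):
    # e.g. /volumes/kubernetes/tst-pvc with the assumed default,
    #      /volumes/<group>/tst-pvc when CEPH_VOLUME_GROUP is set.
    return os.path.join("/volumes", VOLUME_GROUP, share_name)

# If the provision call and the delete call see different CEPH_VOLUME_GROUP
# values (set in one, unset in the other), the delete is pointed at a
# directory that was never created, and reading ceph.dir.layout.pool on it
# fails with ENOENT -- the cephfs.ObjectNotFound traceback in the issue
# description below.
print(share_path("tst-pvc"))
```

Under that assumption, the fix is simply to make sure the deleting provisioner runs with the same CEPH_VOLUME_GROUP value as the one that created the share.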
Thanks for reporting the issue! This repo is no longer being maintained and we are in the process of archiving this repo. Please see kubernetes/org#1563 for more details. If your issue relates to nfs provisioners, please create a new issue in https://github.com/kubernetes-sigs/nfs-ganesha-server-and-external-provisioner or https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner. Going to close this issue in order to archive this repo. Apologies for the churn and thanks for your patience! 🙏
When used with Ceph v14, the provisioner deletes neither the volume from Ceph nor the user secret.
The error looks like the following:
```
I0321 18:04:38.854980 1 controller.go:1158] delete "pvc-1a61b1c9-9dcf-41a7-b8fc-183799545396": started
E0321 18:04:39.936090 1 cephfs-provisioner.go:268] failed to delete share "tst-pvc" for "k8s.default.tst-pvc", err: exit status 1, output:
Traceback (most recent call last):
  File "/usr/local/bin/cephfs_provisioner", line 364, in <module>
    main()
  File "/usr/local/bin/cephfs_provisioner", line 360, in main
    cephfs.delete_share(share, user)
  File "/usr/local/bin/cephfs_provisioner", line 319, in delete_share
    self._deauthorize(volume_path, user_id)
  File "/usr/local/bin/cephfs_provisioner", line 260, in _deauthorize
    pool_name = self.volume_client._get_ancestor_xattr(path, "ceph.dir.layout.pool")
  File "/lib/python2.7/site-packages/ceph_volume_client.py", line 756, in _get_ancestor_xattr
    result = self.fs.getxattr(path, attr)
  File "cephfs.pyx", line 954, in cephfs.LibCephFS.getxattr (/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/13.2.1/rpm/el7/BUILD/ceph-13.2.1/build/src/pybind/cephfs/pyrex/cephfs.c:10083)
cephfs.ObjectNotFound: [Errno 2] error in getxattr
E0321 18:04:39.936184 1 controller.go:1181] delete "pvc-1a61b1c9-9dcf-41a7-b8fc-183799545396": volume deletion failed: exit status 1
W0321 18:04:39.936357 1 controller.go:787] Retrying syncing volume "pvc-1a61b1c9-9dcf-41a7-b8fc-183799545396" because failures 0 < threshold 15
E0321 18:04:39.936437 1 controller.go:802] error syncing volume "pvc-1a61b1c9-9dcf-41a7-b8fc-183799545396": exit status 1
I0321 18:04:39.936499 1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolume", Namespace:"", Name:"pvc-1a61b1c9-9dcf-41a7-b8fc-183799545396", UID:"13a518c8-4512-4bfe-a643-2fac09dd06b5", APIVersion:"v1", ResourceVersion:"492116", FieldPath:""}): type: 'Warning' reason: 'VolumeFailedDelete' exit status 1
```
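The traceback above shows the provisioner failing inside _deauthorize while reading the ceph.dir.layout.pool xattr of the share path: cephfs.ObjectNotFound ([Errno 2]) means the path itself does not exist, which an ancestor-xattr walk cannot recover from. Below is a rough sketch of that lookup against a kernel CephFS mount, illustrative only; the mount point and share path are assumptions, and this is not the ceph_volume_client code:

```python
import errno
import os

def get_ancestor_xattr(path, attr):
    """Read an xattr from path, falling back to ancestors when it is unset.

    Rough illustration of what ceph_volume_client._get_ancestor_xattr is doing
    in the traceback above, written against a kernel CephFS mount rather than
    libcephfs; not the library's actual code.
    """
    try:
        value = os.getxattr(path, attr)
        if value:
            return value.decode()
    except OSError as exc:
        if exc.errno == errno.ENOENT:
            # The directory itself is missing. This is the case the provisioner
            # hits (cephfs.ObjectNotFound -> "[Errno 2] error in getxattr"):
            # walking up to an ancestor cannot help, because the share path
            # being deauthorized does not exist at all.
            raise
        if exc.errno != errno.ENODATA:
            raise
    parent = os.path.dirname(path.rstrip("/"))
    if parent in ("", "/", path):
        raise LookupError("%s not set on %s or any ancestor" % (attr, path))
    # The attribute is merely unset here: keep walking up the tree.
    return get_ancestor_xattr(parent, attr)

# Assumed client-side mount and share path, mirroring "tst-pvc" from the log.
share = "/mnt/cephfs/volumes/kubernetes/tst-pvc"
try:
    print("layout pool:", get_ancestor_xattr(share, "ceph.dir.layout.pool"))
except FileNotFoundError:
    print("share directory is gone or was never created at", share)
```

In other words, a missing xattr (ENODATA) is handled by walking up to a parent directory, but a missing directory (ENOENT) aborts the deletion, which matches the ObjectNotFound error and the retry loop in the log.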