CVE-2021-20288 - ceph-csi is using insecure global_id reclaim #2063
Labels
component/build (Issues and PRs related to compiling Ceph-CSI)
dependency/ceph (depends on core Ceph functionality)
question (Further information is requested)
Describe the bug
When trying to deploy ceph-csi against a recently upgraded Ceph cluster, the provisioner (RBD, in my case) cannot connect to the cluster. The provisioner logs report "permission denied", but digging into the Ceph logs shows it tripping on the insecure global_id reclaim issue described in the CVE named in the subject:
https://docs.ceph.com/en/latest/security/CVE-2021-20288/
With the Ceph cluster properly secured to prevent insecure global_id reuse, ceph-csi cannot connect. When I disabled that protection in the cluster (using the information at the bottom of the page above), PVCs were processed as expected and PVs were allocated.
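For reference, this is the monitor setting I toggled to disable the protection; a minimal sketch based on the workaround described at the bottom of the CVE page (the option name comes from the Ceph docs, the exact invocation may differ in your deployment):

```shell
# Re-allow insecure global_id reclaim (reverts the CVE-2021-20288 hardening).
# With this set to true, ceph-csi can connect again; with false, it fails
# with "permission denied".
ceph config set mon auth_allow_insecure_global_id_reclaim true

# To harden the cluster again once all clients reclaim global_id securely:
ceph config set mon auth_allow_insecure_global_id_reclaim false
```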
Environment details
My Ceph cluster is currently at v16.2.3, but the problem also occurred when it was at v15.2.11.
I was using the canary image of ceph-csi, then backed down to v3.3.0 in case it was a recent development issue. I'm installing from the manifests (no Helm charts).
My Kubernetes clusters:
Steps to reproduce
Secure the Ceph cluster according to the instructions in the CVE
Deploy the manifests as described in the documentation
Deploy a StorageClass using the standard manifest
Deploy the sample PVC from the distribution (roughly the commands sketched below)
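For concreteness, this is approximately what I ran; the manifest paths are from my checkout of the ceph-csi repo and the exact file names may differ, so treat them as assumptions rather than a verbatim history:

```shell
# Harden the cluster per the CVE (disallow insecure global_id reclaim).
ceph config set mon auth_allow_insecure_global_id_reclaim false

# Deploy the RBD provisioner and node plugin from the kubernetes manifests
# (paths approximate, taken from the ceph-csi deploy tree).
kubectl create -f deploy/rbd/kubernetes/csi-rbdplugin-provisioner.yaml
kubectl create -f deploy/rbd/kubernetes/csi-rbdplugin.yaml

# StorageClass and sample PVC from the examples directory (names approximate).
kubectl create -f examples/rbd/storageclass.yaml
kubectl create -f examples/rbd/pvc.yaml
```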
Actual results
logs from the csi-rbd-provisioner pod show repeated retries, all of which fail with "rados permission denied" errors
logs from the ceph mon show the following whenever the provisioner tries to connect:
cephx server client.admin: attempt to reclaim global_id xxxxxx without presenting ticket
cephx server client.admin: could not verify old ticket
The reference to "global_id" recalled the CVE and the Ceph config change I had made to silence the warnings (i.e. to prevent insecure global_id reuse). When I reverted that config to the insecure state, the PVC was bound immediately and the PV was created.
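For anyone else debugging this, the health warnings described on the CVE page are a quick way to see which clients are still reclaiming global_id insecurely; the exact warning text varies by release, so this is just a pointer, not a captured transcript:

```shell
# Ceph surfaces clients that still do insecure global_id reclaim through
# health warnings (AUTH_INSECURE_GLOBAL_ID_RECLAIM); this is how I confirmed
# the provisioner's client.admin connection was the one being rejected.
ceph health detail
```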
Expected behavior
ceph-csi reclaims its global_id securely, so that I can keep my Ceph cluster hardened against CVE-2021-20288.
Logs
Log snippets included above