Can't dynamic create PersistentVolume after PVC created. Operation not permitted #904

Closed
drweber opened this issue Mar 31, 2020 · 21 comments · Fixed by #919
Labels
bug (Something isn't working) · Priority-0 (highest priority issue) · Release-2.1.0 · ReleaseBlocker (ReleaseBlocker PRs/Issues, etc.)

Comments

drweber commented Mar 31, 2020

Hello everyone. I've run into an issue with Ceph.
I have:

  • RKE cluster v2.3.5:
    3 etcd, 2 controlplane, 10 worker nodes
    etcd and controlplane nodes have only an internal NIC
    worker nodes have 3 NICs: Internal (internal access only), External (public access), Storage (for Ceph access)
  • Kubernetes 1.16.7
  • ceph-csi deployed following the manual and this one
    In the Ceph configuration:
client.kubernetes
    key: <key>
    caps: [mgr] allow *
    caps: [mon] allow *
    caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=rbd
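
For reference, the caps a cluster actually has for this user can be double-checked from any node with admin access (a plain ceph CLI query, nothing specific to ceph-csi):

ceph auth get client.kubernetes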
╰ kubectl get all | grep csi
pod/csi-rbdplugin-67t8s               3/3   Running  0     22s
pod/csi-rbdplugin-h85qc               3/3   Running  0     22s
pod/csi-rbdplugin-j2sfn               3/3   Running  0     22s
pod/csi-rbdplugin-jbxq8               3/3   Running  0     22s
pod/csi-rbdplugin-k895x               3/3   Running  0     22s
pod/csi-rbdplugin-mfdgw               3/3   Running  0     22s
pod/csi-rbdplugin-provisioner-6956bdfdf9-6nwcj   6/6   Running  0     31s
pod/csi-rbdplugin-provisioner-6956bdfdf9-b8hsv   6/6   Running  0     31s
pod/csi-rbdplugin-provisioner-6956bdfdf9-wjr6k   6/6   Running  0     31s
pod/csi-rbdplugin-psks9               3/3   Running  0     22s
pod/csi-rbdplugin-wd5mn               3/3   Running  0     22s
pod/csi-rbdplugin-wdcc4               3/3   Running  0     22s
pod/csi-rbdplugin-wh8tm               3/3   Running  0     22s
service/csi-metrics-rbdplugin     ClusterIP  10.43.70.156  <none>    8080/TCP,8090/TCP  23s
service/csi-rbdplugin-provisioner   ClusterIP  10.43.118.252  <none>    8080/TCP,8090/TCP  32s
daemonset.apps/csi-rbdplugin  10    10    10   10      10     <none>     23s
deployment.apps/csi-rbdplugin-provisioner   3/3   3      3      32s
replicaset.apps/csi-rbdplugin-provisioner-6956bdfdf9   3     3     3    33s

After applying:

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
 name: rbd-pvc
spec:
 accessModes:
  - ReadWriteOnce
 volumeMode: Filesystem
 resources:
  requests:
   storage: 1Gi
 storageClassName: csi-rbd-sc

In the csi-provisioner logs:

I0330 15:23:53.381131    1 controller.go:1199] provision "default/rbd-pvc" class "csi-rbd-sc": started
I0330 15:23:53.386399    1 controller.go:494] CreateVolumeRequest {Name:pvc-b8f3d915-b79d-4971-9bc9-3d3b8e6afe08 CapacityRange:required_bytes:1073741824 VolumeCapabilities:[mount:<fs_type:"ext4" mount_flags:"discard" > access_mode:<mode:SINGLE_NODE_WRITER > ] Parameters:map[clusterID:00f829de-9c40-4a0b-b8f2-8e270d3bded7 csi.storage.k8s.io/controller-expand-secret-name:csi-rbd-secret csi.storage.k8s.io/controller-expand-secret-namespace:default csi.storage.k8s.io/fstype:ext4 csi.storage.k8s.io/node-stage-secret-name:csi-rbd-secret csi.storage.k8s.io/node-stage-secret-namespace:default csi.storage.k8s.io/provisioner-secret-name:csi-rbd-secret csi.storage.k8s.io/provisioner-secret-namespace:default imageFeatures:layering pool:rbd] Secrets:map[] VolumeContentSource:<nil> AccessibilityRequirements:<nil> XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0330 15:23:53.386781    1 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"rbd-pvc", UID:"b8f3d915-b79d-4971-9bc9-3d3b8e6afe08", APIVersion:"v1", ResourceVersion:"5630315", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/rbd-pvc"
I0330 15:23:53.398947    1 connection.go:180] GRPC call: /csi.v1.Controller/CreateVolume
I0330 15:23:53.399004    1 connection.go:181] GRPC request: {"capacity_range":{"required_bytes":1073741824},"name":"pvc-b8f3d915-b79d-4971-9bc9-3d3b8e6afe08","parameters":{"clusterID":"00f829de-9c40-4a0b-b8f2-8e270d3bded7","imageFeatures":"layering","pool":"rbd"},"secrets":"**stripped**","volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4","mount_flags":["discard"]}},"access_mode":{"mode":1}}]}
I0330 15:23:54.519042    1 connection.go:183] GRPC response: {}
I0330 15:23:54.519615    1 connection.go:184] GRPC error: rpc error: code = Internal desc = failed to get IOContext: failed to get connection: connecting failed: rados: ret=1, Operation not permitted
I0330 15:23:54.519669    1 controller.go:1016] Final error received, removing PVC b8f3d915-b79d-4971-9bc9-3d3b8e6afe08 from claims in progress

I can create a PersistentVolume in the Rancher UI and attach it to the created StorageClass, and it shows up as created by the external provisioner rbd.csi.ceph.com.
The network connection is OK and the credentials work.

kubectl run ceph-test --image=ceph/ceph:v15.2 --restart=Never -n default:

[root@ceph-test-d6b968c66-n49jl /]# rbd -n client.kubernetes --keyring /etc/ceph/keyring info rbd/pvctest
rbd image 'pvctest':
	size 1 GiB in 256 objects
	order 22 (4 MiB objects)
	snapshot_count: 0
	id: 1ed9de615f3bc8
	block_name_prefix: rbd_data.1ed9de615f3bc8
	format: 2
	features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
	op_features:
	flags:
	create_timestamp: Mon Mar 30 15:53:40 2020
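
For what it's worth, the equivalent librados-level check from the same test pod would be something along these lines (a sketch; it assumes the same keyring path and the rbd pool shown above, and simply confirms that this user can open an IOContext on the pool):

rados -n client.kubernetes --keyring /etc/ceph/keyring -p rbd ls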
drweber changed the title from "Can't create PersistentVolume. Operation not permitted" to "Can't dynamic create PersistentVolume after PVC created. Operation not permitted" on Mar 31, 2020
Madhu-1 commented Apr 1, 2020

This looks like a user permission issue: either you specified the wrong user keyring in the secret, or the user caps are not sufficient to perform rados operations. Can you try with the user caps below?

"mon", "profile rbd",
"mgr", "allow rw",
"osd", "profile rbd",

@nixpanic do you see any other issue?

flyersa commented Apr 1, 2020

Hi Madhu,

We also tried profile rbd and it's not working; it's not a permission issue on Ceph. We can manually create volumes in the rbd pool and do all listing operations flawlessly with the keyring provided for Kubernetes, and we also tested this inside one of the CSI provisioner containers. The interesting thing is that I don't even see the requests in the monitor logs when it complains about "Operation not permitted".

Madhu-1 commented Apr 1, 2020

@flyersa can you give it a try with admin credentials? I just want to check whether it is a permission issue or a ceph-csi issue. I'm asking because we have E2E runs on each PR and I don't see any issue with the cephcsi code.

If it is an issue, please provide the csi-rbdplugin container logs and we can take a look.

Madhu-1 commented Apr 1, 2020

Please also provide the cephcsi version you are using.

drweber commented Apr 1, 2020

@Madhu-1 about the version:
image: quay.io/cephcsi/cephcsi:canary

It works with admin creds:

╰ kubectl get pvc,pv --all-namespaces
NAMESPACE   NAME                                 STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
default     persistentvolumeclaim/ceph-rbd-pvc   Bound    pvc-48e0d51d-116b-4a27-81f4-83c320e3f392   1Gi        RWO            ceph-rbd       3m25s
NAMESPACE   NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                  STORAGECLASS   REASON   AGE
            persistentvolume/pvc-48e0d51d-116b-4a27-81f4-83c320e3f392   1Gi        RWO            Delete           Bound    default/ceph-rbd-pvc   ceph-rbd                3m24s

but not with

"mon", "profile rbd",
"mgr", "allow rw",
"osd", "profile rbd",
  Normal   ExternalProvisioning  14s (x4 over 43s)  persistentvolume-controller                                                                       waiting for a volume to be created, either by external provisioner "rbd.csi.ceph.com" or manually created by system administrator
  Normal   Provisioning          5s (x7 over 43s)   rbd.csi.ceph.com_csi-rbdplugin-provisioner-6956bdfdf9-sdsjn_7881a09b-8615-48fb-a3e4-f3bb36e366c2  External provisioner is provisioning volume for claim "default/ceph-rbd-pvc"
  Warning  ProvisioningFailed    4s (x7 over 42s)   rbd.csi.ceph.com_csi-rbdplugin-provisioner-6956bdfdf9-sdsjn_7881a09b-8615-48fb-a3e4-f3bb36e366c2  failed to provision volume with StorageClass "ceph-rbd": rpc error: code = Internal desc = failed to get IOContext: failed to get connection: connecting failed: rados: ret=1, Operation not permitted

But I can create a volume with these creds from the CLI in a container, and I can also create a PV from the Rancher UI with the plain (non-admin) user.

drweber commented Apr 1, 2020

The permissions on client.kubernetes are now exactly 1:1 the same as the admin creds, and it is still not working.

╰ kubectl get pvc,pv --all-namespaces
NAMESPACE   NAME                                 STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
default     persistentvolumeclaim/ceph-rbd-pvc   Pending                                      ceph-rbd       3s

...
  Normal   ExternalProvisioning  14s (x4 over 43s)  persistentvolume-controller                                                                       waiting for a volume to be created, either by external provisioner "rbd.csi.ceph.com" or manually created by system administrator
  Normal   Provisioning          5s (x7 over 43s)   rbd.csi.ceph.com_csi-rbdplugin-provisioner-6956bdfdf9-sdsjn_7881a09b-8615-48fb-a3e4-f3bb36e366c2  External provisioner is provisioning volume for claim "default/ceph-rbd-pvc"
  Warning  ProvisioningFailed    4s (x7 over 42s)   rbd.csi.ceph.com_csi-rbdplugin-provisioner-6956bdfdf9-sdsjn_7881a09b-8615-48fb-a3e4-f3bb36e366c2  failed to provision volume with StorageClass "ceph-rbd": rpc error: code = Internal desc = failed to get IOContext: failed to get connection: connecting failed: rados: ret=1, Operation not permitted

So something is definitely wrong with the CSI driver.

Madhu-1 commented Apr 1, 2020

Instead of canary, can you try a released cephcsi version, i.e. 2.0.0 or 2.0.1?

Madhu-1 commented Apr 1, 2020

@Madhu-1 about version
image: quay.io/cephcsi/cephcsi:canary

with admin creds works

Do you mean that PVC creation is working with admin credentials?

╰ kubectl get pvc,pv --all-namespaces
NAMESPACE   NAME                                 STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
default     persistentvolumeclaim/ceph-rbd-pvc   Bound    pvc-48e0d51d-116b-4a27-81f4-83c320e3f392   1Gi        RWO            ceph-rbd       3m25s
NAMESPACE   NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                  STORAGECLASS   REASON   AGE
            persistentvolume/pvc-48e0d51d-116b-4a27-81f4-83c320e3f392   1Gi        RWO            Delete           Bound    default/ceph-rbd-pvc   ceph-rbd                3m24s

but not with

"mon", "profile rbd",
"mgr", "allow rw",
"osd", "profile rbd",
  Normal   ExternalProvisioning  14s (x4 over 43s)  persistentvolume-controller                                                                       waiting for a volume to be created, either by external provisioner "rbd.csi.ceph.com" or manually created by system administrator
  Normal   Provisioning          5s (x7 over 43s)   rbd.csi.ceph.com_csi-rbdplugin-provisioner-6956bdfdf9-sdsjn_7881a09b-8615-48fb-a3e4-f3bb36e366c2  External provisioner is provisioning volume for claim "default/ceph-rbd-pvc"
  Warning  ProvisioningFailed    4s (x7 over 42s)   rbd.csi.ceph.com_csi-rbdplugin-provisioner-6956bdfdf9-sdsjn_7881a09b-8615-48fb-a3e4-f3bb36e366c2  failed to provision volume with StorageClass "ceph-rbd": rpc error: code = Internal desc = failed to get IOContext: failed to get connection: connecting failed: rados: ret=1, Operation not permitted

but I can create Volume with this creds from CLI in container and I can create PV from Rancher UI also with simple user not admin

Create a PV from the Rancher UI? Do you mean create a PVC?

@luhailiang0527

@drweber I had the same problem. Running kubectl describe pvc shows this error:
Warning ProvisioningFailed 5s (x6 over 24s) rbd.csi.ceph.com_csi-rbdplugin-provisioner-5d958fcbb6-q5l47_8c5927eb-6e53-44f4-98fd-35baf42740e8 failed to provision volume with StorageClass "csi-rbd-sc": rpc error: code = Internal desc = failed to get IOContext: failed to get connection: connecting failed: rados: ret=1, Operation not permitted
How did you solve this problem?

Madhu-1 commented Apr 7, 2020

Looks like we are hitting this issue when we use any user other than admin:

I0407 04:35:30.959988       1 rbd_util.go:545] ID: 22 Req-ID: pvc-326e8625-ec27-4cff-9642-9b5435984add setting disableInUseChecks on rbd volume to: false
I0407 04:35:31.615616       1 rbd_journal.go:279] ID: 22 Req-ID: pvc-326e8625-ec27-4cff-9642-9b5435984add generated Volume ID (0001-0009-rook-ceph-0000000000000001-36a232fc-7889-11ea-9fa7-0242ac110010) and image name (csi-vol-36a232fc-7889-11ea-9fa7-0242ac110010) for request name (pvc-326e8625-ec27-4cff-9642-9b5435984add)
I0407 04:35:31.615717       1 rbd_util.go:140] ID: 22 Req-ID: pvc-326e8625-ec27-4cff-9642-9b5435984add rbd: create csi-vol-36a232fc-7889-11ea-9fa7-0242ac110010 size 1024M (features: layering) using mon 10.96.235.53:6789, pool replicapool 
E0407 04:35:31.621444       1 controllerserver.go:235] ID: 22 Req-ID: pvc-326e8625-ec27-4cff-9642-9b5435984add failed to create volume: failed to get IOContext: failed to get connection: connecting failed: rados: ret=1, Operation not permitted
E0407 04:35:31.733602       1 utils.go:161] ID: 22 Req-ID: pvc-326e8625-ec27-4cff-9642-9b5435984add GRPC error: rpc error: code = Internal desc = failed to get IOContext: failed to get connection: connecting failed: rados: ret=1, Operation not permitted
I0407 04:35:39.783023       1 utils.go:157] ID: 23 Req-ID: pvc-326e8625-ec27-4cff-9642-9b5435984add GRPC call: /csi.v1.Controller/CreateVolume
I0407 04:35:39.783061       1 utils.go:158] ID: 23 Req-ID: pvc-326e8625-ec27-4cff-9642-9b5435984add GRPC request: {"capacity_range":{"required_bytes":1073741824},"name":"pvc-326e8625-ec27-4cff-9642-9b5435984add","parameters":{"clusterID":"rook-ceph","imageFeatures":"layering","imageFormat":"2","pool":"replicapool"},"secrets":"***stripped***","volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]}
I0407 04:35:39.786212       1 rbd_util.go:545] ID: 23 Req-ID: pvc-326e8625-ec27-4cff-9642-9b5435984add setting disableInUseChecks on rbd volume to: false
I0407 04:35:40.461025       1 rbd_journal.go:279] ID: 23 Req-ID: pvc-326e8625-ec27-4cff-9642-9b5435984add generated Volume ID (0001-0009-rook-ceph-0000000000000001-3bdc338c-7889-11ea-9fa7-0242ac110010) and image name (csi-vol-3bdc338c-7889-11ea-9fa7-0242ac110010) for request name (pvc-326e8625-ec27-4cff-9642-9b5435984add)
I0407 04:35:40.462867       1 rbd_util.go:140] ID: 23 Req-ID: pvc-326e8625-ec27-4cff-9642-9b5435984add rbd: create csi-vol-3bdc338c-7889-11ea-9fa7-0242ac110010 size 1024M (features: layering) using mon 10.96.235.53:6789, pool replicapool 
E0407 04:35:40.492736       1 controllerserver.go:235] ID: 23 Req-ID: pvc-326e8625-ec27-4cff-9642-9b5435984add failed to create volume: failed to get IOContext: failed to get connection: connecting failed: rados: ret=1, Operation not permitted
E0407 04:35:40.681073       1 utils.go:161] ID: 23 Req-ID: pvc-326e8625-ec27-4cff-9642-9b5435984add GRPC error: rpc error: code = Internal desc = failed to get IOContext: failed to get connection: connecting failed: rados: ret=1, Operation not permitted

@nixpanic PTAL

Madhu-1 added the bug, Priority-0, and ReleaseBlocker labels on Apr 7, 2020
Madhu-1 commented Apr 7, 2020

This is happening with the canary image. @drweber @luhailiang0527 please use a released ceph-csi version.

nixpanic commented Apr 7, 2020

This error comes from pkg/rbd/rbd_util.go#L179:

failed to get IOContext: failed to get connection: connecting failed: rados: ret=1, Operation not permitted

That means the rados.Conn connection to the cluster was made, but getting the rados.IOContext was not possible due to Operation not permitted.

However, ret=1 may suggest that there was no error: the rbd command only checks for errors with ret < 0, and not for ret == 0 the way go-ceph does.

From this, I think it is a bug in go-ceph.

Madhu-1 added a commit to Madhu-1/ceph-csi that referenced this issue Apr 7, 2020
Updated E2E to use a normal user other than admin user,
who is having access to create and mount cephfs PVC and
create and map rbd PVC

we will use the user created by rook who is having above
access.

Signed-off-by: Madhu Rajanna [email protected]

This PR adds a test case for ceph#904

Signed-off-by: Madhu Rajanna <[email protected]>
nixpanic commented Apr 7, 2020

Hi @drweber, @flyersa and @luhailiang0527, could you try the following testing image?
quay.io/nixpanic/cephcsi:testing_issue_904 has a hacked version of go-ceph that checks for the same error condition as the rbd command, so it should behave the same (i.e. succeed with non-admin credentials). I do not have time to test it now, maybe later.

@phlogistonjohn

@nixpanic as a reminder, the go-ceph code prints the absolute value of the "errno" in the error, so I'm not as confident that it's a bug. But please do file a go-ceph issue if your altered container reveals it's a problem with the error handling.

nixpanic commented Apr 7, 2020

@phlogistonjohn the error case in rbd is ret < 0, whereas go-ceph checks for ret == 0. In the case of this issue, I think the value is set to 1, so rbd would succeed and go-ceph would fail.

@phlogistonjohn

Oh yes, that's true, and we can certainly change that. But skimming some of the Ceph sources, I don't see an obvious path where this function would return 1 rather than the much more likely -1. I'm not saying that it's not possible, just suggesting it is more likely to be an access issue, given that we format the error with the absolute value of ret.

drweber commented Apr 7, 2020

It looks like adding the missing secret parameters fixed the issue. Before, the StorageClass only had:

apiVersion: storage.k8s.io/v1
kind: StorageClass
...
parameters:
  csi.storage.k8s.io/controller-expand-secret-name: secret-csi
  csi.storage.k8s.io/controller-expand-secret-namespace: default

and now it has:

apiVersion: storage.k8s.io/v1
kind: StorageClass
...
parameters:
  csi.storage.k8s.io/provisioner-secret-name: secret-csi
  csi.storage.k8s.io/provisioner-secret-namespace: default
  csi.storage.k8s.io/node-stage-secret-name: secret-csi
  csi.storage.k8s.io/node-stage-secret-namespace: default
  csi.storage.k8s.io/controller-expand-secret-name: secret-csi
  csi.storage.k8s.io/controller-expand-secret-namespace: default

The example YAML was taken from the manual and later updated.

The image quay.io/cephcsi/cephcsi:canary was also changed to quay.io/cephcsi/cephcsi:v2.0.1.
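
For reference, a complete StorageClass along these lines might look like the following (a sketch only: the clusterID, pool, and secret names are the ones used earlier in this thread and will differ in other setups; allowVolumeExpansion is optional):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-rbd-sc
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: 00f829de-9c40-4a0b-b8f2-8e270d3bded7
  pool: rbd
  imageFeatures: layering
  csi.storage.k8s.io/fstype: ext4
  csi.storage.k8s.io/provisioner-secret-name: secret-csi
  csi.storage.k8s.io/provisioner-secret-namespace: default
  csi.storage.k8s.io/node-stage-secret-name: secret-csi
  csi.storage.k8s.io/node-stage-secret-namespace: default
  csi.storage.k8s.io/controller-expand-secret-name: secret-csi
  csi.storage.k8s.io/controller-expand-secret-namespace: default
reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:
  - discard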

nixpanic commented Apr 8, 2020

Looking into this more, with #919 it is easier to reproduce. The cephcsi:testing_issue_904 image is based on the master branch (:canary tag) and has the same problem.

Using the :v2.0.1 tag is a workaround.

The code under pkg/util/conn_pool.go calls rados.NewConn() without any arguments. It probably needs to pass the (optional) user (and optional cluster-id) to solve this problem.
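
A minimal sketch of the kind of change being described, using the go-ceph rados API (the actual fix is in #919; the monitor address, user ID, keyfile path, and pool name below are illustrative only):

package main

import (
	"fmt"
	"log"

	"github.com/ceph/go-ceph/rados"
)

// connectAs opens a rados connection as a named cephx user instead of the
// implicit default (client.admin). userID is the ID without the "client."
// prefix; keyfile is a path to a file containing only the key.
func connectAs(monitors, userID, keyfile, pool string) (*rados.Conn, *rados.IOContext, error) {
	// rados.NewConn() would not carry the user ID; naming the user here is
	// the point of the fix discussed above.
	conn, err := rados.NewConnWithUser(userID)
	if err != nil {
		return nil, nil, err
	}
	if err := conn.SetConfigOption("mon_host", monitors); err != nil {
		return nil, nil, err
	}
	if err := conn.SetConfigOption("keyfile", keyfile); err != nil {
		return nil, nil, err
	}
	if err := conn.Connect(); err != nil {
		return nil, nil, fmt.Errorf("connecting failed: %w", err)
	}
	ioctx, err := conn.OpenIOContext(pool)
	if err != nil {
		conn.Shutdown()
		return nil, nil, fmt.Errorf("failed to get IOContext: %w", err)
	}
	return conn, ioctx, nil
}

func main() {
	// hypothetical values for illustration only
	conn, ioctx, err := connectAs("10.96.235.53:6789", "kubernetes", "/tmp/csi/keyfile", "rbd")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Shutdown()
	defer ioctx.Destroy()
	fmt.Println("connected and opened the pool as the non-admin user")
}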

nixpanic added a commit to Madhu-1/ceph-csi that referenced this issue Apr 9, 2020
Use the Credentials.ID in combination with the keyfile to connect to the
Ceph cluster. This makes it possible to use different users for
different tasks on the cluster.

Fixes: ceph#904
Signed-off-by: Niels de Vos <[email protected]>
mergify bot closed this as completed in #919 on Apr 9, 2020
drweber commented Apr 9, 2020

So version v2.1.0 will have a fix?
@nixpanic @Madhu-1

Madhu-1 commented Apr 9, 2020

Yes. This issue is also not present in the 2.0.1 and 2.0.0 releases.


easonboy commented Oct 15, 2021

With the quay.io/cephcsi/cephcsi v3.1.0 docker image there seems to be the same problem: error = failed to get connection: connecting failed: rados: ret=13, Permission denied. After changing to the image quay.io/cephcsi/cephcsi:canary it works. Please help check, thanks!

Details:

[root@master01 ceph-csi]# git branch

* (detached from v3.1.0)
  devel

[root@master01 ceph-csi]#
[root@master01 rbd]# kubectl -n ceph-csi describe pod csi-rbdplugin-provisioner-5f9d8fb9c6-4l8xf|grep quay.io/cephcsi/cephcsi
Image: quay.io/cephcsi/cephcsi:canary
Image ID: docker-pullable://quay.io/cephcsi/cephcsi@sha256:678dbf3dada23438ae101175e7ce893565982b6ff2362b19c7c4f13254446663
Image: quay.io/cephcsi/cephcsi:canary
Image ID: docker-pullable://quay.io/cephcsi/cephcsi@sha256:678dbf3dada23438ae101175e7ce893565982b6ff2362b19c7c4f13254446663
Normal Pulling 19m kubelet Pulling image "quay.io/cephcsi/cephcsi:canary"
Normal Pulled 16m kubelet Successfully pulled image "quay.io/cephcsi/cephcsi:canary"
Normal Pulled 16m kubelet Container image "quay.io/cephcsi/cephcsi:canary" already present on
