Cannot provision EFS volumes with efs-provisioner #953
Please try giving it read/write permissions over endpoints in kube-system (https://github.com/kubernetes-incubator/external-storage/pull/957/files#diff-fea5b10aff1df5ab55aa2de4bfe1260cR33) by creating a Role & RoleBinding with subjects.namespace set to kube-system. This is a new development and it might change again very soon; sorry for the lack of docs. I'll try to update the Helm chart, though I am not sure where it lives. Alternatively, we could lock the Helm chart to a slightly older version for the time being.
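For reference, a minimal sketch of such a Role and RoleBinding, assuming the provisioner's ServiceAccount is named efs-provisioner and also lives in kube-system (names and namespaces are illustrative; adjust to your deployment):

```yaml
# Sketch: grant the efs-provisioner ServiceAccount read/write access to
# Endpoints in the kube-system namespace (used for leader election).
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-efs-provisioner
  namespace: kube-system
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-efs-provisioner
  namespace: kube-system
subjects:
  - kind: ServiceAccount
    name: efs-provisioner
    # Assumption: the provisioner's ServiceAccount is in kube-system.
    namespace: kube-system
roleRef:
  kind: Role
  name: leader-locking-efs-provisioner
  apiGroup: rbac.authorization.k8s.io
```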
The chart is here: https://github.com/helm/charts/tree/master/stable/efs-provisioner. But the RBAC resources provided in that diff do not address the issue.
Having the same issue as reported by rafaelmagu.
I have overridden the Helm chart's …
@rafaelmagu @DigitalAssembly please see: #964
Aha. I just stumbled across this. I do believe that our RBAC config already covers this:

```yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: efs-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: run-efs-provisioner
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: efs-provisioner-runner
subjects:
  - kind: ServiceAccount
    name: efs-provisioner
    namespace: kube-system
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-efs-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-efs-provisioner
subjects:
  - kind: ServiceAccount
    name: efs-provisioner
    namespace: kube-system
roleRef:
  kind: Role
  name: leader-locking-efs-provisioner
  apiGroup: rbac.authorization.k8s.io
```

And still I get the same error. Did I miss something?
Ah, interesting. The endpoints permissions needed to be on the ClusterRole as well. Updated it to:

```yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: efs-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
```

Happy now.
@deitch This fixed my issue. Thanks.
Happy to help @dannyvargas23.
Yup, as @wongma7 said, adding the namespace to the ClusterRoleBinding fixed the issue for me. If you're using the default manifest file given here - https://github.com/kubernetes-incubator/external-storage/blob/master/aws/efs/deploy/manifest.yaml - simply change the ClusterRoleBinding section so that the ServiceAccount subject carries the namespace the provisioner is deployed to, as in the sketch below.
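A minimal sketch of that ClusterRoleBinding, assuming the provisioner's ServiceAccount is named efs-provisioner and the manifest was applied to the default namespace (the exact snippet from the original comment was not preserved):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: run-efs-provisioner
subjects:
  - kind: ServiceAccount
    name: efs-provisioner
    # Assumption: the provisioner is deployed to "default";
    # change this to match your deployment (e.g. kube-system).
    namespace: default
roleRef:
  kind: ClusterRole
  name: efs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
```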
Hi @wongma7 @rafaelmagu, there is a fix in progress for the official Helm chart to add support for the Endpoints access required by v2.0.0 (thanks @kppullin 👏): helm/charts#9127
The problem for me was that the manifest example doesn't include a reference to a service account. This took 8 hours to solve. 🤦♂️
It seems that the manifest.yaml is also missing the serviceAccount bit in the spec, which leads to the pod trying to use the default service account and thus failing with RBAC errors.

To fix this, you just need to change

```yaml
spec:
  containers:
    - name: efs-provisioner
      image: quay.io/external_storage/efs-provisioner:latest
```

to

```yaml
spec:
  serviceAccount: efs-provisioner
  containers:
    - name: efs-provisioner
      image: quay.io/external_storage/efs-provisioner:latest
```

in the manifest.yaml (after creating the service account in the first place, as mentioned in the previous comment).
Same as the two comments above: after applying the additional rules in the PR (https://github.com/helm/charts/pull/9127/files#diff-1f3ae64e932358240df168628073a894R25), I was able to start binding mounts. Only 3 hours wasted here, but quite annoying nonetheless.
Any updates on this? I'd like to use Helm for this deployment :)
@AndresPineros good news, the patch finally got merged yesterday.
Issues go stale after 90d of inactivity. If this issue is safe to close now, please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Closed by helm/charts#9127
I got the same error. What fixed it for me was adding a service account to rbac.yaml and then updating the manifest.yaml deployment to reference that service account explicitly; a sketch of both changes follows.
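A minimal sketch of those two changes, assuming the same efs-provisioner ServiceAccount name used elsewhere in this thread (the exact snippets from the original comment were not preserved):

```yaml
# rbac.yaml: create the ServiceAccount the provisioner will run as.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: efs-provisioner
---
# manifest.yaml (fragment of the Deployment's pod spec):
# reference that ServiceAccount explicitly.
spec:
  serviceAccount: efs-provisioner
  containers:
    - name: efs-provisioner
      image: quay.io/external_storage/efs-provisioner:latest
```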
I recently deployed efs-provisioner with the stable Helm chart to the kube-system namespace. However, it is failing to provision a PVC in another namespace. Is this expected? The docs don't mention this limitation.