This repository has been archived by the owner on Oct 21, 2020. It is now read-only.

Cannot provision EFS volumes with efs-provisioner #953

Closed
rafaelmagu opened this issue Aug 21, 2018 · 19 comments
Labels
area/aws/efs lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale.

Comments

@rafaelmagu

I recently deployed efs-provisioner with the stable Helm chart to the kube-system namespace. However, it is failing to provision a PVC in another namespace. Is this expected? The docs don't mention this limitation.

E0821 03:12:21.077132       1 leaderelection.go:234] error retrieving resource lock kube-system/example.com-aws-efs: endpoints "example.com-aws-efs" is forbidden: User "system:serviceaccount:kube-system:efs-provisioner" cannot get endpoints in the namespace "kube-system"
@wongma7
Contributor

wongma7 commented Aug 24, 2018

Please try giving it read/write permissions over endpoints in kube-system by creating a Role and RoleBinding with subjects.namespace set to kube-system: https://github.com/kubernetes-incubator/external-storage/pull/957/files#diff-fea5b10aff1df5ab55aa2de4bfe1260cR33

This is a new development and it might change again very soon; sorry for the lack of docs.

I'll try to update the Helm chart, though I am not sure where it lives. Alternatively, we could lock the Helm chart to a slightly older version for the time being.
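A minimal sketch of the Role and RoleBinding described above (the resource name leader-locking-efs-provisioner and the ServiceAccount name efs-provisioner are assumptions; adjust them to match your release):

```yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-efs-provisioner
  namespace: kube-system   # must be the namespace holding the leader-election lock
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-efs-provisioner
  namespace: kube-system
subjects:
  - kind: ServiceAccount
    name: efs-provisioner
    namespace: kube-system   # subjects.namespace set to kube-system, as suggested
roleRef:
  kind: Role
  name: leader-locking-efs-provisioner
  apiGroup: rbac.authorization.k8s.io
```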

@rafaelmagu
Author

The chart is here: https://github.com/helm/charts/tree/master/stable/efs-provisioner

But the RBAC resources provided in that diff do not address the issue.

@crsantini

Having the same issue as reported by @rafaelmagu.

@rafaelmagu
Author

I have overridden the Helm chart's image.tag to use v1.0.0-k8s1.10.

@ramene

ramene commented Sep 3, 2018

@rafaelmagu @DigitalAssembly please see #964

@deitch

deitch commented Sep 5, 2018

Aha, I just stumbled across this. I do believe that our Role and ClusterRole (and their Bindings, of course) are set correctly, but I'm still getting the error.

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: efs-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: run-efs-provisioner
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: efs-provisioner-runner
subjects:
- kind: ServiceAccount
  name: efs-provisioner
  namespace: kube-system
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-efs-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-efs-provisioner
subjects:
  - kind: ServiceAccount
    name: efs-provisioner
    namespace: kube-system
roleRef:
  kind: Role
  name: leader-locking-efs-provisioner
  apiGroup: rbac.authorization.k8s.io

And still:

E0905 13:43:48.445313       1 leaderelection.go:234] error retrieving resource lock kube-system/k8s.io-aws-efs: endpoints "k8s.io-aws-efs" is forbidden: User "system:serviceaccount:kube-system:efs-provisioner" cannot get endpoints in the namespace "kube-system"

Did I miss something?

@deitch

deitch commented Sep 5, 2018

Ah, interesting. The Role and RoleBinding seem to be insufficient; you need the endpoints permissions in the ClusterRole instead. I removed the Role and RoleBinding and updated the ClusterRole to:

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: efs-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]

Happy now.

@dannyvargas23

@deitch This fixed my issue. Thanks.

@deitch

deitch commented Sep 16, 2018

Happy to help @dannyvargas23 .

@ghost

ghost commented Nov 1, 2018

Yup, as @wongma7 said, adding the namespace to the ClusterRoleBinding fixed the issue for me, if you're using the default manifest file given here: https://github.com/kubernetes-incubator/external-storage/blob/master/aws/efs/deploy/manifest.yaml

Simply change the ClusterRoleBinding section to this:

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-efs-provisioner
subjects:
  - kind: ServiceAccount
    name: efs-provisioner
    namespace: YOUR_NAMESPACE
roleRef:
  kind: ClusterRole
  name: efs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---

@whereisaaron
Contributor

Hi @wongma7 @rafaelmagu, there is a fix in progress for the official Helm chart, to add support for the Endpoints access required by v2.0.0 (thanks @kppullin 👏). helm/charts#9127

@zimmertr
Contributor

The problem for me was that the manifest example doesn't include a reference to a service account. See the deployment.yml for an example of how to add one.

This took 8 hours to solve. 🤦‍♂️
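For anyone landing here, a minimal sketch of the missing piece (the ServiceAccount name efs-provisioner is an assumption; it must match whatever your RBAC manifests bind the permissions to):

```yaml
# Fragment of the Deployment's pod template in manifest.yaml
spec:
  template:
    spec:
      serviceAccountName: efs-provisioner  # without this, the pod runs as the namespace's default service account
      containers:
        - name: efs-provisioner
          image: quay.io/external_storage/efs-provisioner:latest
```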

@bunjiboys

It seems that the manifest.yaml is also missing the serviceAccount field in the pod spec, which leads to the pod running as the default service account and failing with RBAC errors like:

1 leaderelection.go:252] error retrieving resource lock my-namespace/example.com-aws-efs: endpoints "example.com-aws-efs" is forbidden: User "system:serviceaccount:my-namespace:default" cannot get endpoints in the namespace "my-namespace"

To fix this, you just need to change

    spec:
      containers:
        - name: efs-provisioner
          image: quay.io/external_storage/efs-provisioner:latest

to

    spec:
      serviceAccount: efs-provisioner
      containers:
        - name: efs-provisioner
          image: quay.io/external_storage/efs-provisioner:latest

in the manifest.yaml (after creating the service account in the first place, as mentioned in the previous comment).

@geerlingguy
Contributor

Same as the two comments above: after applying the additional rules in the PR (https://github.com/helm/charts/pull/9127/files#diff-1f3ae64e932358240df168628073a894R25), I was able to start binding mounts. Only 3 hours wasted here, but quite annoying nonetheless.

@AndresPineros

Any updates on this? I'd like to use Helm for this deployment :)

@whereisaaron
Contributor

@AndresPineros good news: the patch finally got merged yesterday.
helm/charts#9127

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 29, 2019
@rafaelmagu
Author

Closed by helm/charts#9127

@gerritjvv

I got:

E0802 21:02:38.177700 1 leaderelection.go:252] error retrieving resource lock default/example.com-aws-efs: endpoints "example.com-aws-efs" is forbidden: User "system:serviceaccount:default:default" cannot get resource "endpoints" in API group "" in the namespace "default"

What fixed it for me was:

Adding a service account to rbac.yaml:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: efs-provisioner

then updating the manifest.yaml deployment to reference the service account explicitly:

kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: efs-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate 
  template:
    metadata:
      labels:
        app: efs-provisioner
    spec:
      serviceAccountName: efs-provisioner
