Not able to limit CRD permissions of the operator #245

Closed
praymann opened this issue Feb 28, 2020 · 3 comments

Expected behaviour

When deploying the operator, I should be able to use RBAC to limit the operator's required cluster-level access to Custom Resource Definitions to only the related resource, with:

  - apiGroups:
      - databases.spotahome.com
    resources:
      - redisfailovers
    verbs:
      - "*"
  - apiGroups:
      - apiextensions.k8s.io
    resources:
      - customresourcedefinitions
    resourceNames:
      - redisfailovers.databases.spotahome.com
    verbs:
      - "*"

Actual behaviour

The operator enters CrashLoopBackOff with:

time="2020-02-28T22:07:17Z" level=info msg="Listening on :9710 for metrics exposure" src="asm_amd64.s:1337"
time="2020-02-28T22:07:17Z" level=warning msg="controller name not provided, it should have a name, fallback name to: *redisfailover.RedisFailoverHandler" controller=redisfailover operator=redis-operator src="generic.go:64"
time="2020-02-28T22:07:17Z" level=info msg="starting operator" operator=redis-operator src="main.go:87"
time="2020-02-28T22:07:17Z" level=error msg="Error received: error creating crd redisfailovers.databases.spotahome.com: customresourcedefinitions.apiextensions.k8s.io is forbidden: User \"system:serviceaccount:operators:redisoperator\" cannot create resource \"customresourcedefinitions\" in API group \"apiextensions.k8s.io\" at the cluster scope, exiting..." src="main.go:124"
time="2020-02-28T22:07:17Z" level=info msg="Stopping everything, waiting 5s..." src="main.go:101"
error executing: error creating crd redisfailovers.databases.spotahome.com: customresourcedefinitions.apiextensions.k8s.io is forbidden: User "system:serviceaccount:operators:redisoperator" cannot create resource "customresourcedefinitions" in API group "apiextensions.k8s.io" at the cluster scope

yet:

kubectl auth can-i create customresourcedefinitions/redisfailovers.databases.spotahome.com --as=system:serviceaccount:operators:redisoperator -n operators

reports a yes.
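
A fuller picture of what the ServiceAccount is allowed to do can be dumped like this (assuming your own user is permitted to impersonate it):

kubectl auth can-i --list --as=system:serviceaccount:operators:redisoperator -n operators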

Steps to reproduce the behaviour

  • Create an operators namespace
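For example:
kubectl create namespace operators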
  • Create a ClusterRole for use in namespaces like so:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: redisoperator-namespaced
  namespace: operators
  labels:
    app: redisoperator
rules:
  - apiGroups:
      - ""
    resources:
      - pods
      - services
      - endpoints
      - events
      - configmaps
    verbs:
      - "*"
  - apiGroups:
      - ""
    resources:
      - secrets
    verbs:
      - "get"
  - apiGroups:
      - apps
    resources:
      - deployments
      - statefulsets
    verbs:
      - "*"
  - apiGroups:
      - policy
    resources:
      - poddisruptionbudgets
    verbs:
      - "*"
  • Create a ClusterRole for use by the operator like so:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: redisoperator-global
  namespace: operators
  labels:
    app: redisoperator
rules:
  - apiGroups:
      - databases.spotahome.com
    resources:
      - redisfailovers
    verbs:
      - "*"
  - apiGroups:
      - apiextensions.k8s.io
    resources:
      - customresourcedefinitions
    resourceNames:
      - redisfailovers.databases.spotahome.com
    verbs:
      - "*"
  • Apply the ServiceAccount from all-redis-operator-resources.yaml into the operators namespace
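The ServiceAccount referenced by the bindings here boils down to the following (sketch; the authoritative definition is the one shipped in all-redis-operator-resources.yaml):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: redisoperator
  namespace: operators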
  • Bind the global ClusterRole to the ServiceAccount with:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: redisoperator-global
  namespace: operators
  labels:
    app: redisoperator
subjects:
- kind: ServiceAccount
  name: redisoperator
  namespace: operators
roleRef:
  kind: ClusterRole
  name: redisoperator-global
  apiGroup: rbac.authorization.k8s.io
  • Bind the namespaced ClusterRole into an example namespace with a RoleBinding like so:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: redisoperator
  namespace: default
  labels:
    app: redisoperator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: redisoperator-namespaced
subjects:
  - kind: ServiceAccount
    name: redisoperator
    namespace: operators
  • Apply the Deployment from all-redis-operator-resources.yaml into the operators namespace
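For example, assuming the Deployment has been copied out of all-redis-operator-resources.yaml into its own manifest (hypothetical filename):
kubectl apply -n operators -f redis-operator-deployment.yaml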

Environment

How are the pieces configured?

  • Redis Operator = v1.0.0
  • Kubernetes = 1.13.2
  • RBAC is enabled

Logs

Same as above, but with the -debug flag:

time="2020-02-28T23:33:31Z" level=debug msg="debug mode activated" src="main.go:124"
time="2020-02-28T23:33:31Z" level=info msg="Listening on :9710 for metrics exposure" src="asm_amd64.s:1337"
time="2020-02-28T23:33:31Z" level=warning msg="controller name not provided, it should have a name, fallback name to: *redisfailover.RedisFailoverHandler" controller=redisfailover operator=redis-operator src="generic.go:64"
time="2020-02-28T23:33:31Z" level=info msg="starting operator" operator=redis-operator src="main.go:87"
time="2020-02-28T23:33:31Z" level=error msg="Error received: error creating crd redisfailovers.databases.spotahome.com: customresourcedefinitions.apiextensions.k8s.io is forbidden: User \"system:serviceaccount:operators:redisoperator\" cannot create resource \"customresourcedefinitions\" in API group \"apiextensions.k8s.io\" at the cluster scope, exiting..." src="main.go:124"
time="2020-02-28T23:33:31Z" level=info msg="Stopping everything, waiting 5s..." src="main.go:101"
error executing: error creating crd redisfailovers.databases.spotahome.com: customresourcedefinitions.apiextensions.k8s.io is forbidden: User "system:serviceaccount:operators:redisoperator" cannot create resource "customresourcedefinitions" in API group "apiextensions.k8s.io" at the cluster scope

Examples of using kubectl auth can-i:

praymann@localhost ~> k auth can-i create customresourcedefinitions/redisfailovers.databases.spotahome.com --as=system:serviceaccount:operators:redisoperator -n operators
yes
praymann@localhost ~> k auth can-i create customresourcedefinitions --as=system:serviceaccount:operators:redisoperator -n operators
no
praymann@localhost ~> k auth can-i delete customresourcedefinitions --as=system:serviceaccount:operators:redisoperator -n operators
no
praymann@localhost ~> k auth can-i update customresourcedefinitions --as=system:serviceaccount:operators:redisoperator -n operators
no
praymann@localhost ~> k auth can-i update customresourcedefinitions/redisfailovers.databases.spotahome.com --as=system:serviceaccount:operators:redisoperator -n operators
yes
praymann@localhost ~> k auth can-i delete customresourcedefinitions/redisfailovers.databases.spotahome.com --as=system:serviceaccount:operators:redisoperator -n operators
yes
praymann@localhost ~> k auth can-i list customresourcedefinitions/redisfailovers.databases.spotahome.com --as=system:serviceaccount:operators:redisoperator -n operators
yes
praymann@localhost ~> k auth can-i get customresourcedefinitions/redisfailovers.databases.spotahome.com --as=system:serviceaccount:operators:redisoperator -n operators
yes
praymann@localhost ~> k auth can-i watch customresourcedefinitions/redisfailovers.databases.spotahome.com --as=system:serviceaccount:operators:redisoperator -n operators
yes

psalaberria002 commented Mar 16, 2020

Were you able to fix this?

Giving full permissions on all CRDs seems excessive: https://github.com/spotahome/redis-operator/blob/v1.0.0/charts/redisoperator/templates/rbac.yaml#L25-L30
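
For reference: Kubernetes RBAC cannot restrict the create verb by resourceNames, because the object name is not known at authorization time. That is also why the name-qualified kubectl auth can-i checks above return yes while the operator's actual create request fails. A sketch of roughly the tightest rule that still lets a v1.0.0 operator create its CRD at startup (untested; the operator may need further verbs such as list or watch):

  - apiGroups:
      - apiextensions.k8s.io
    resources:
      - customresourcedefinitions
    verbs:
      - create
  - apiGroups:
      - apiextensions.k8s.io
    resources:
      - customresourcedefinitions
    resourceNames:
      - redisfailovers.databases.spotahome.com
    verbs:
      - get
      - update
      - patch
      - delete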

@github-actions

This issue is stale because it has been open for 45 days with no activity.

github-actions bot added the stale label Jan 14, 2022

@github-actions

This issue was closed because it has been inactive for 14 days since being marked as stale.
