Better RBAC #624
So the current helm chart assigns all these rights to all 3 deployments (KEDA operator, webhook and metrics adapter/server):
Also, some of the entries are pretty wild for my taste:
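The chart's actual rule list isn't reproduced in this excerpt, but purely as an illustration of the kind of entry being criticized here (a read-only rule for anything anywhere is mentioned further down), such an entry inside a ClusterRole would look roughly like this:

```yaml
# Illustrative only – not the chart's real rule list. A "read anything anywhere"
# entry grants get/list/watch on every resource in every API group, cluster-wide.
rules:
  - apiGroups: ["*"]
    resources: ["*"]
    verbs: ["get", "list", "watch"]
```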
Point 5 is definitely a typo.
Thanks for tackling this. Is it possible to list here the fields that can be deprecated in the future and are present only for backwards compatibility (if there are any, of course)?
Previously, there was only one section in the helm chart's `values.yaml`; it looked like this:
(link) This whole section can be deprecated and replaced with:
This is the only "breaking"/deprecating work I've done (it's not really breaking, there is a fallback included, but you know what I mean). Other changes were additive with sane defaults (no-ops by default).
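A minimal sketch of how such a deprecate-with-fallback could be expressed in a template; the key names and the `create` flag below are assumptions for illustration, not the chart's actual values layout:

```yaml
# Hypothetical fallback sketch – the value paths are illustrative.
# Prefer the new .Values.rbac section and silently fall back to the legacy
# .Values.permissions section, so existing values files keep rendering the
# same manifests (deprecating, not breaking).
{{- $rbac := .Values.permissions | default dict }}
{{- if .Values.rbac }}
{{-   $rbac = .Values.rbac }}
{{- end }}
{{- if $rbac.create }}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: keda-operator   # illustrative name
  # ...rules rendered from $rbac would follow here
{{- end }}
```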
Currently, there is one service account shared across all 3 deployments, even though each of them needs only a subset of those RBAC rules. Also, it's not possible to set RBAC per namespace when running KEDA with `WATCH_NAMESPACE` not equal to `""`. Here are some of my thoughts:
I'd like to achieve fine-tuned RBAC settings using the helm values as the input, so that we can change the values that drive KEDA's installation on our side in the Kedify agent, render the YAML template and just re-apply it based on what RBAC settings are required in the dashboard.
Here are some flaws that I found in the current helm chart:
1. Currently, there is one ClusterRole and one ClusterRoleBinding that give very wide rights to the service account used by the KEDA operator; the same service account is then also reused by the webhook deployment (although the webhook clearly doesn't need 99% of those rights).
2. The cluster role is too wide, giving the operator almost cluster admin; for instance, there is a read-only rule for anything anywhere.
3. `watchNamespace` is not taken into account at all; no matter whether we configure it to an empty string (denoting the cluster-wide KEDA mode) or to a given namespace, the cluster role is applied.
4. RBAC options are scattered across different sections in `values.yaml` (`.Values.rbac` vs `.Values.permissions`). Can we afford breaking backward compatibility here and merge them under `.Values.rbac`? (A sketch of a merged section follows after this list.)
5. This looks like a typo to me, there is no standard resource called `external`; maybe it should have been this?
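To make point 4 more concrete, here is a sketch of what a single merged `.Values.rbac` section could look like; all key names below are hypothetical, not the chart's current ones:

```yaml
# Hypothetical merged values.yaml section – key names are illustrative.
rbac:
  create: true              # render RBAC manifests at all
  operator:
    # namespaces to render a Role/RoleBinding for; an empty list keeps
    # today's cluster-wide ClusterRole behaviour
    watchNamespaces: []
  webhook:
    create: true            # the webhook needs only a tiny subset of rules
  metricsServer:
    create: true
```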
Some ideas:
One can mix a ClusterRole with a normal RoleBinding (namespaced, not a ClusterRoleBinding). This way, we can reuse the same role (the ClusterRole) and apply it only to a namespace using the RoleBinding; a sketch follows below the list. This will reduce the number of manifests the helm chart will generate:
- 1 ClusterRole
- n RoleBindings, one per each namespace specified in `watchNamespace`
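A minimal sketch of that combination; the names, namespaces and rule set are illustrative. Kubernetes allows a namespaced RoleBinding to reference a ClusterRole, which grants the ClusterRole's rules only within that RoleBinding's namespace:

```yaml
# One ClusterRole, reused per watched namespace via namespaced RoleBindings.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: keda-operator
rules:
  - apiGroups: ["keda.sh"]
    resources: ["scaledobjects", "scaledjobs", "triggerauthentications"]
    verbs: ["get", "list", "watch", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding           # a namespaced binding, not a ClusterRoleBinding
metadata:
  name: keda-operator
  namespace: team-a         # one RoleBinding per namespace from watchNamespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole         # referencing a ClusterRole here is allowed
  name: keda-operator
subjects:
  - kind: ServiceAccount
    name: keda-operator
    namespace: keda         # illustrative: the namespace KEDA is installed in
```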
Restrict the access by white-listing: the names from `scaleTargetRef` could be listed in `role.rules.resourceNames` (unfortunately this doesn't work for the `create` verb). A sketch follows below.
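A minimal sketch of the `resourceNames` whitelisting idea, with made-up target names. Note that Kubernetes cannot restrict the `create` verb via `resourceNames`, since the object's name isn't known at authorization time for create requests, which is the limitation mentioned above:

```yaml
# Illustrative Role whitelisting only the workloads referenced by scaleTargetRef.
# The resource names are hypothetical; "create" is intentionally absent because
# resourceNames cannot restrict it.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: keda-operator-scale-targets
  namespace: team-a
rules:
  - apiGroups: ["apps"]
    resources: ["deployments", "deployments/scale"]
    resourceNames: ["checkout-service", "payment-service"]
    verbs: ["get", "update", "patch"]
```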