Kubernetes extension - Configurable role policies #21595

Closed

Conversation

@heubeck (Contributor) commented Nov 21, 2021

Solves #21553

@heubeck changed the title Kubernetes/additional role policies Kubernetes extension - Configurable role policies Nov 21, 2021
@geoand requested a review from iocanel November 21, 2021 20:16
@heubeck force-pushed the kubernetes/AdditionalRolePolicies branch from 10114af to da7eb2f November 22, 2021 11:27
@geoand (Contributor) commented Nov 26, 2021

@iocanel can you take a look at this one?

Thanks

@iocanel (Contributor) left a comment

I took a quick look and I have some comments for discussion.

Note: These are topics for discussion and not change requests.

  • The pull request assumes that only one role may be added and bound to the application. While this is not necessarily a bad idea it might be a bit too restrictive.

  • Should we cover cases where we bind existing roles? This doesn't seem possible at the moment.

  • The configuration target introduced in this PR is pretty much the PolicyRule, so properties like cluster-wide appear on a logical level below their actual target Role. What happens when different policies have conflicting cluster-wide options? Who wins?

  • I am not a huge fan of having to specify a name for policy rules that are actually unnamed. I am wondering if we should generate Roles with a single policy each and use that name as the role name. Maybe something like:

quarkus.kubernetes.rbac.roles.crd-viewer.api-groups=apiextensions.k8s.io
quarkus.kubernetes.rbac.roles.crd-viewer.resources=crd
quarkus.kubernetes.rbac.roles.crd-viewer.verbs=get
quarkus.kubernetes.rbac.roles.crd-viewer.cluster-wide=true
quarkus.kubernetes.rbac.roles.crd-viewer.binding=my-app-crd-viewer

In this example quarkus.kubernetes.rbac.roles.crd-viewer.binding=my-app-crd-viewer could possibly be used on its own when it comes to provided roles (?)
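
To make the suggestion above concrete, here is a rough sketch of the manifests such a crd-viewer entry could produce. This is only an illustration under assumed naming and mapping rules (the ServiceAccount, namespace and resource names are made up), not what the PR currently generates:

# Hypothetical output for the crd-viewer example; cluster-wide=true is assumed to map to a ClusterRole.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: crd-viewer
rules:
  - apiGroups: ["apiextensions.k8s.io"]
    resources: ["customresourcedefinitions"]  # the config says "crd"; the actual resource name is customresourcedefinitions
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: my-app-crd-viewer                     # taken from the hypothetical .binding property
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: crd-viewer
subjects:
  - kind: ServiceAccount
    name: my-app                              # assumed application ServiceAccount
    namespace: my-namespace                   # assumed namespace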

@@ -0,0 +1,40 @@
import io.dekorate.utils.Serialization
Contributor:

For most of the kubernetes extension we use standard tests. We only add invoker tests to test aspects that are specific to the build tool (e.g. passing options to the Maven build, etc.).

Contributor:

+1. Let's try and not add invoker tests unless absolutely necessary

@heubeck (Contributor, Author) commented Nov 26, 2021

I took a quick look and I have some comments for discussion.

Note: These are topics for discussion and not change requests.

Happy to discuss :D

* The pull request assumes that only one role may be added and bound to the application. While this is not necessarily a bad idea it might be a bit too restrictive.

In general, I like your idea, but would offer the following for consideration:

  • Configuration would become even more complex, as every role has many policies, so it would be a map of maps
  • What would we consider reasonable for "application configuration"? My original use case is an operator project that requires some more rights. As there's already a Role generated, I simply thought about enhancing it, which seemed consistent.
* Should we cover cases where we bind existing roles? This doesn't seem possible at the moment.

Totally makes sense! After we've aligned on all points, let's define the configuration format and I'll refactor based on that.

* The configuration target introduced in this PR is pretty much the `PolicyRule`, so properties like `cluster-wide` appear on a logical level below their actual target `Role`. What happens when different policies have conflicting `cluster-wide` options? Who wins?

Maybe I don't get the point: all cluster-wide = true policies are listed as rules in a single ClusterRole, same with all cluster-wide = false in a single Role. Stupid rule combinations will just be listed together; I don't know what Kubernetes would say to that. What do you mean by "conflicting"?
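
A minimal sketch of that grouping, assuming two configured policies with different cluster-wide values (rule contents and names are illustrative only, not taken from the PR):

# All cluster-wide=true policies end up as rules of a single ClusterRole ...
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: my-app
rules:
  - apiGroups: ["apiextensions.k8s.io"]
    resources: ["customresourcedefinitions"]
    verbs: ["get"]
---
# ... and all cluster-wide=false policies as rules of a single namespaced Role.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: my-app
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "list"]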

* I am not a huge fan of having to specify a name for policy rules that are actually unnamed. I am wondering if we should generate `Role`s with a single policy each and use that name as the role name. Maybe something like:
quarkus.kubernetes.rbac.roles.crd-viewer.api-groups=apiextensions.k8s.io
quarkus.kubernetes.rbac.roles.crd-viewer.resources=crd
quarkus.kubernetes.rbac.roles.crd-viewer.verbs=get
quarkus.kubernetes.rbac.roles.crd-viewer.cluster-wide=true
quarkus.kubernetes.rbac.roles.crd-viewer.binding=my-app-crd-viewer

As stated above, it would have to be a map of maps - or is your suggestion to build Roles with just a single PolicyRule each and bind all of them? That could be an option as well, but it also doesn't feel very elegant.
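
For illustration, a map-of-maps configuration could look something like this; the property names below are purely hypothetical and not something either variant of the PR defines:

# role name -> rule key -> rule fields (hypothetical syntax)
quarkus.kubernetes.rbac.roles.crd-viewer.rules.crds.api-groups=apiextensions.k8s.io
quarkus.kubernetes.rbac.roles.crd-viewer.rules.crds.resources=crd
quarkus.kubernetes.rbac.roles.crd-viewer.rules.crds.verbs=get
quarkus.kubernetes.rbac.roles.crd-viewer.rules.secrets.resources=secrets
quarkus.kubernetes.rbac.roles.crd-viewer.rules.secrets.verbs=get,list
quarkus.kubernetes.rbac.roles.crd-viewer.cluster-wide=true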

In this example quarkus.kubernetes.rbac.roles.crd-viewer.binding=my-app-crd-viewer could possibly be used on its own when it comes to provided roles (?)

Probably.

@iocanel (Contributor) commented Nov 29, 2021

Checked the implementation again and it's clearer to me what you are doing with cluster-wide and it's not that much of a problem. Still, I prefer if we can work on a config model that makes things clearer.

I need to look at it a bit closer and think about it

@heubeck (Contributor, Author) commented Nov 29, 2021

Checked the implementation again and it's clearer to me what you are doing with cluster-wide and it's not that much of a problem. Still, I prefer if we can work on a config model that makes things clearer.

I need to look at it a bit closer and think about it

Happy to hear your opinion and ideas.

I also thought about whether the only sensible option might be to only specify which ServiceAccount to use.
Maybe it's not intended to manage somehow unrelated Kubernetes resources via an application configuration.
We could assume that (Cluster)Roles and (Cluster)RoleBindings are externally managed and that no non-detectable RBAC resources are generated besides the obviously required ones (secret-view...).
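
Under that assumption the application config would only need to point at the externally managed ServiceAccount, e.g. via the existing quarkus.kubernetes.service-account property; this is just a sketch and the account name is made up:

# The generated Deployment references this ServiceAccount; the Role/RoleBinding granting its
# permissions are assumed to be managed outside the application.
quarkus.kubernetes.service-account=my-operator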

@iocanel (Contributor) commented Dec 16, 2021

@geoand I'd like an extra view on this, because I have @heubeck blocked and I'd like us to proceed with this.

@geoand (Contributor) commented Dec 16, 2021

I'll have a look later today or tomorrow

@iocanel (Contributor) commented Dec 16, 2021

I'll have a look later today or tomorrow

Thanks

@heubeck (Contributor, Author) commented Dec 16, 2021

@geoand I'd like an extra view on this, because I have @heubeck blocked and I'd like us to proceed with this.

Thank you, but no issue - in fact, I'm not even sure which variant I'd like best.
Maybe you can both point out your opinions on what would make sense.

@geoand (Contributor) left a comment

I added a tiny doc comment and a comment about the tests.

Otherwise, LGTM

quarkus.kubernetes.policies.crd.verbs=get,update
----

<1> `ext` in this example is only a grouping key for identifying a conjugated `PolicyRule` in the `Role` manifest, it's solely used in the configuration file.
Contributor:

Suggested change
<1> `ext` in this example is only a grouping key for identifying a conjugated `PolicyRule` in the `Role` manifest, it's solely used in the configuration file.
<1> `ext` in this example is only a grouping key for identifying a conjugated `PolicyRule` in the `Role` manifest, its solely used in the configuration file.

@heubeck (Contributor, Author) Dec 16, 2021

in fact, I wanted to say, that ... the grouping key ... (it) is solely used...


@iocanel (Contributor) commented Jan 11, 2022

Let's try to move this forward again.

I propose to start with the following changes.

  1. Use the quarkus.kubernetes.rbac. prefix.
  2. Add support for binding existing roles: `quarkus.kubernetes.rbac.role-binding.=.
  3. Add support for binding existing cluster roles. As above but with cluster-role-binding.

I think that points [1,2,3] are something we both agree on. So, I think these should be added to the PR anyway.

The hard part is how we define roles ourselves (if we need to).
The way I see things:

  • This could be considered slightly out of scope, as the role policies themselves are not exactly the application's concern.
  • We can always throw the roles under src/main/kubernetes in yaml instead of defining them via properties.
  • So, if we allow users to define roles via config, it must be:
    • Very simple
    • Clear
    • and flexible

So we could possibly go with something like this:

Option A: Single policy Role

quarkus.kubernetes.rbac.roles.crd-viewer.api-groups=apiextensions.k8s.io
quarkus.kubernetes.rbac.roles.crd-viewer.resources=crd
quarkus.kubernetes.rbac.roles.crd-viewer.verbs=get

Option B: Multi-policy roles:

quarkus.kubernetes.rbac.roles.crd-viewer.policy."apiextensions.k8s.io/crd"=get,delete,etc

I am OK with both approaches, with a slight preference for Option B.

I suggest moving forward with [1,2,3] first and then working on the hard part.
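
As a sketch only, points 2 and 3 could be spelled roughly like this; the exact property names and value format were not settled in this thread:

# Bind the application to an existing namespaced Role named "view" (hypothetical syntax)
quarkus.kubernetes.rbac.role-binding.my-app-view.role=view
# Bind the application to an existing ClusterRole named "crd-viewer" (hypothetical syntax)
quarkus.kubernetes.rbac.cluster-role-binding.my-app-crd-viewer.cluster-role=crd-viewer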

@iocanel (Contributor) commented Jan 11, 2022

@heubeck added some comments on how I think we should proceed ^

@heubeck (Contributor, Author) commented Jan 11, 2022

@heubeck added some comments on how I think we should proceed ^

Hey @iocanel,

Completely agree with you, 1, 2, 3 are perfectly fine and definitely valuable.
In the meantime, I even think it would be best to provide nothing further, neither A nor B - as this isn't application related and is just a pass-through. I guess everyone would prefer to manage roles externally.

It may take some time for me to implement this, as I'm very busy right now - one suggestion:
using
quarkus.kubernetes.rbac.[cluster]role-bindings (plural)
instead of
quarkus.kubernetes.rbac.[cluster]role-binding (singular)
as a list and generate one *-binding for each value, WDYT?
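
In other words, something like this (hypothetical syntax, just to illustrate the plural/list idea):

# One RoleBinding per listed Role, one ClusterRoleBinding per listed ClusterRole (hypothetical)
quarkus.kubernetes.rbac.role-bindings=view,configmap-reader
quarkus.kubernetes.rbac.cluster-role-bindings=crd-viewer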

@heubeck marked this pull request as draft January 11, 2022 10:13
@iocanel (Contributor) commented Feb 21, 2022

@heubeck added some comments on how I think we should proceed ^

Hey @iocanel,

Completely agree with you, 1, 2, 3 are perfectly fine and definitely valuable. In the meantime, I even think it would be best to provide nothing further, neither A nor B - as this isn't application related and is just a pass-through. I guess everyone would prefer to manage roles externally.

It may take some time for me to implement this, as I'm very busy right now - one suggestion: using quarkus.kubernetes.rbac.[cluster]role-bindings (plural) instead of quarkus.kubernetes.rbac.[cluster]role-binding (singular) as a list and generate one *-binding for each value, WDYT?

Agree

@heubeck (Contributor, Author) commented Sep 16, 2022

Will restart that effort at a later point in time.
Thanks for your support!

@heubeck closed this Sep 16, 2022
@quarkus-bot added the triage/invalid label Sep 16, 2022