Support Any Additional resource #416
Comments
@oliverbaehler thanks for suggesting this improvement. I think this will require some sort of refactoring of the code. Let's see what @prometherion says.
I'm a bit late on this, thanks for the patience! This feature would be awesome and I couldn't agree more, despite some limitations I'd like to evaluate with the community.

We have to keep in mind that CRDs are, essentially, OpenAPI v3 specifications: we get a schema that must be stuck to, and that's great. I love it when everything is strictly typed, since it's less error-prone and safer, especially at compile time. I'm really happy with the unversioned client set we have on offer, and the dynamic client setup looks easy as pie, also considering we already have whatever is needed. I'm more concerned about the naming of the resources, which must be prefixed to avoid collisions (especially for cluster-scoped resources), and about setting the Namespace field. tl;dr: we would be required to deal with all of the above.

A lot of words and a simple question: should we implement this on our own or, rather, take advantage of HNC? We would just need to point the Tenant to a template Namespace and that controller would start replicating the resources without hacking our code-base so much, although it would add a tough dependency, since an external component would be required. I'm not sure if we can grab their code and start it using our Manager; that needs to be investigated.
Another solution to this would be just using the generate functionality of e.g. Kyverno (https://kyverno.io/docs/writing-policies/generate/). You could write policies which would generate the resources based on namespace labels. As far as I am concerned, this would also cover the use-case. You would have to assume that Kyverno is already installed, and therefore would have kind of a dependency. But I would also say that such use cases occur in more complex setups, where the probability is high that such policy management is already deployed. This would not require hacking your codebase, and would also not require the template namespace you mentioned. But it would mostly not fit the incrementing name convention etc.; I don't know if that's a big thing. Actually, Kyverno implements cloning with propagation of changes: https://kyverno.io/docs/writing-policies/generate/#clone-a-configmap-and-propagate-changes

I am mentioning that because it could make sense not to inflate the Capsule project into an all-use-case-resolving solution. But it's up to you guys whether you think that feature should be covered by Capsule.

Some examples. Let's say you would want to add a CiliumNetworkPolicy for each namespace that's managed by Capsule (policy.yaml):
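Such a Kyverno generate policy might look roughly like this (a sketch only; the tenant label key and the generated policy body are illustrative assumptions, not taken from Capsule's actual labels):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-cilium-policy
spec:
  rules:
    - name: generate-cilium-policy
      match:
        resources:
          kinds:
            - Namespace
          # assumed label identifying namespaces that belong to a tenant
          selector:
            matchLabels:
              capsule.clastix.io/tenant: oil
      generate:
        apiVersion: cilium.io/v2
        kind: CiliumNetworkPolicy
        name: default-deny
        # place the generated policy in the namespace that triggered the rule
        namespace: "{{request.object.metadata.name}}"
        synchronize: true
        data:
          spec:
            endpointSelector: {}
```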
I didn't test it, but it should give you the idea of what I am talking about.
@prometherion Kiosk has this feature, with very powerful templating and Helm: https://github.com/loft-sh/kiosk#5-working-with-templates. There is no validation:
@prometherion Still the same thoughts on this? I mean, we could start with a very simple feature where it just renders all the resources and validates the basic Kubernetes fields (kind, metadata), and then go from there.
After some thought, I may have an extra idea for this feature. What if we also added a selector with which we could target only the specific namespaces in which to create the additional resources (if no selector is set, create in all of them)? Such an extra resource would look something like this:
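A minimal sketch of the idea, assuming a `namespaceSelector` field on the tenant spec (field names are illustrative, not a final API):

```yaml
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  additionalResources:
    # only namespaces matching these labels receive the resources below
    namespaceSelector:
      matchLabels:
        environment: production
    items:
      - apiVersion: cilium.io/v2
        kind: CiliumNetworkPolicy
        metadata:
          name: default-deny
        spec:
          endpointSelector: {}
```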
This might also help with building tenant-based RBAC, since you could assign RoleBindings based on the namespace's attributes. I don't know if that would help your team, @MaxFedotov. It's just an idea; I'm not yet exactly sure how much sense it makes.
@oliverbaehler JFYI, there's a feature request (#525) that is pretty similar to this one. Could you provide feedback on that?
This attempts to address projectcapsule#525, which is related to projectcapsule#416.

Problem this commit is trying to solve
----

```
Alice would like to create several resources on each of their Namespaces.
Upon the creation of each Namespace, they have to create the desired
resources on each Namespace
```

On the above linked tickets there are two proposed approaches.

Approach 01
----

Create a new resource `TenantResource`, something like this:

```yaml
apiVersion: capsule.clastix.io/v1beta2
kind: TenantResource
metadata:
  name: database
  namespace: solar-management
spec:
  resyncPeriod: 60s
  additionalResources:
    namespaceSelector:
      matchLabels:
        tier: one
    items:
      - apiVersion: presslabs.io/v1beta1
        kind: MySQL
        spec:
          foo: bar
  clusterRoles:
    namespaceSelector:
      matchLabels:
        tier: one
    items:
      - name: database-admin
        subjects:
          - kind: Group
            name: developers
```

Approach 02
----

Extend `Tenant` to support additional resources:

```yaml
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: gas
spec:
  additionalResources:
    - apiVersion: "cilium.io/v2"
      kind: CiliumNetworkPolicy
      metadata:
        name: "l3-rule"
      spec:
        endpointSelector:
          matchLabels:
            role: backend
        ingress:
          - fromEndpoints:
              - matchLabels:
                  role: frontend
```

This commit implements approach `02` for the following reason:

- The namespaces belong to the tenant already; an extra `TenantResource` seems redundant given that the lifecycle of the additional resources is tied to the `Tenant`.

How does the CRD look now?
----

```yaml
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  resyncPeriod: 60s
  additionalResources:
    namespaceSelector:
      matchLabels:
        tier: one
    items:
      - |
        apiVersion: v1
        kind: Pod
        metadata:
          name: nginx
          labels:
            app.kubernetes.io/name: proxy
        spec:
          containers:
            - name: nginx
              image: nginx:1.14.2
              ports:
                - containerPort: 80
                  name: http-web-svc
      - |
        apiVersion: v1
        kind: Service
        metadata:
          name: nginx-service
          labels:
            app.kubernetes.io/name: proxy
        spec:
          selector:
            app.kubernetes.io/name: proxy
          ports:
            - name: name-of-service-port
              protocol: TCP
              port: 80
              targetPort: http-web-svc
  owners:
    - name: alice
      kind: User
```

The difference with the proposed CRD is that items are strings of Kubernetes objects in YAML format. I ran into decoding issues when I tried to use `Unstructured`, so I decided to settle on strings instead.

How it works
----

We search for the namespaces specified by `namespaceSelector` that are owned by the tenant. For each matched namespace, we apply all resources specified in `additionalResources.items`; on any error, we reschedule the next reconciliation after `resyncPeriod`.

What is missing?
----

- [ ] Tests
- [ ] What happens when a tenant is deleted?
- [ ] Does `additionalRoleBindings` cover the `clusterRoles` defined in approach 01?

I will wait for feedback/discussion on how to proceed from here.
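The namespace-matching step described under "How it works" can be sketched as follows. This is a simplified illustration over plain dictionaries, not the actual controller code; the helper names (`matches_selector`, `select_namespaces`) are hypothetical:

```python
def matches_selector(namespace_labels, selector):
    """Return True when every matchLabels entry is present on the namespace."""
    match_labels = selector.get("matchLabels", {})
    return all(namespace_labels.get(k) == v for k, v in match_labels.items())

def select_namespaces(tenant_namespaces, selector):
    """Filter the tenant-owned namespaces down to those matching the selector."""
    return [ns for ns in tenant_namespaces
            if matches_selector(ns.get("labels", {}), selector)]

# Example: two namespaces owned by the tenant, only one in tier "one".
namespaces = [
    {"name": "oil-dev", "labels": {"tier": "one"}},
    {"name": "oil-prod", "labels": {"tier": "two"}},
]
selector = {"matchLabels": {"tier": "one"}}
matched = select_namespaces(namespaces, selector)
# matched now holds only the "oil-dev" namespace; the controller would then
# apply each entry of additionalResources.items into every matched namespace.
```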
Describe the feature
I would like to have a generic approach for resources which are created for each new namespace belonging to a tenant: the same behavior that `.spec.additionalRoleBindings` and `.spec.networkPolicies` already implement, but for any Kubernetes resource.

As to why: we are currently moving to Cilium network policies, and we would require each namespace to implement those policies (the same could also apply to Calico resources or really anything else). So instead of writing our own workaround, it would make sense to be able to declare something like this:
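A sketch of what such a declaration could look like (the `additionalResources` field name and the policy body are illustrative, not an existing Capsule API):

```yaml
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  additionalResources:
    # created in every namespace belonging to this tenant
    - apiVersion: cilium.io/v2
      kind: CiliumNetworkPolicy
      metadata:
        name: allow-frontend
      spec:
        endpointSelector:
          matchLabels:
            role: backend
        ingress:
          - fromEndpoints:
              - matchLabels:
                  role: frontend
```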
What would the new user story look like?
How would the new interaction with Capsule look?
Expected behavior
When I have defined `.spec.additionalResources` on a tenant, those resources are created for each namespace that is assigned to or created in that tenant.