Propagate resources across all Tenant Namespace using the TenantResource custom resource #525
Comments
Looks pretty good - would this also allow specifying pre-existing resources to be created in namespaces? Or would the spec of the item to be created in the namespaces have to be defined within the spec itself? I'm imagining this could also be used to solve #222 - my only concern is if we specify secrets in this spec (although maybe we can use a CRD to retrieve them from elsewhere).
Sure thing, that's a great proposal and would certainly help with #416. For the selector it would be awesome if we could also use
Sorry if it's already implied with the
@oliverbaehler we're going to support it using the
Referring to an external object could be tricky, mostly due to a precondition regarding its existence: what should we do if the referred object doesn't exist? Should we block reconciliation, just log it, or broadcast an event about it? Let's start the discussion: from the simplicity standpoint, referring to bare resources in the items list is the easier option. We could mix the two scopes: if the object in the items array is a reference to an existing object, it gets resolved and replicated; otherwise it is created as-is. Could it work, @slushysnowman?
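A rough sketch of what mixing the two scopes could look like follows; the field names are illustrative only, anticipating the `namespacedItems`/`rawItems` split that the design later in this thread settles on:

```yaml
# Illustrative sketch only: field names are not final at this point in the
# discussion; the adopted design (below) uses namespacedItems for references
# to existing objects and rawItems for inline manifests.
apiVersion: capsule.clastix.io/v1beta2
kind: TenantResource
metadata:
  name: mixed-scopes-example
  namespace: solar-production
spec:
  resources:
    - namespaceSelector:
        matchLabels:
          tier: one
      # Scope 1: reference an object that already exists in another Namespace.
      namespacedItems:
        - apiVersion: v1
          kind: Secret
          namespace: default
          selector:
            matchLabels:
              replicate: "true"
      # Scope 2: declare a bare resource inline, created as-is in each matched Namespace.
      rawItems:
        - apiVersion: v1
          kind: ConfigMap
          metadata:
            name: shared-settings
          data:
            foo: bar
```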
This attempts to address projectcapsule#525 which is related to projectcapsule#416.

problem this commit is trying to solve
----

```
Alice would like to create several resources on each of their Namespaces.
Upon the creation of each Namespace, they have to create the desired resources on each Namespace.
```

On the above linked tickets there are two proposed approaches.

approach 01
----

Create a new resource `TenantResource`, something like this:

```yaml
apiVersion: capsule.clastix.io/v1beta2
kind: TenantResource
metadata:
  name: database
  namespace: solar-management
spec:
  resyncPeriod: 60s
  additionalResources:
    namespaceSelector:
      matchLabels:
        tier: one
    items:
      - apiVersion: presslabs.io/v1beta1
        kind: MySQL
        spec:
          foo: bar
  clusterRoles:
    namespaceSelector:
      matchLabels:
        tier: one
    items:
      - name: database-admin
        subjects:
          - kind: Group
            name: developers
```

approach 02
-----

Extend `Tenant` to support additional resources:

```yaml
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: gas
spec:
  additionalResources:
    - apiVersion: "cilium.io/v2"
      kind: CiliumNetworkPolicy
      metadata:
        name: "l3-rule"
      spec:
        endpointSelector:
          matchLabels:
            role: backend
        ingress:
          - fromEndpoints:
              - matchLabels:
                  role: frontend
```

This commit implements approach `02` due to the following reasons:

- The namespaces already belong to the tenant; an extra `TenantResource` seems redundant given that the lifecycle of the additional resources is tied to the `Tenant`.

How does the CRD look now?
----

```yaml
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  resyncPeriod: 60s
  additionalResources:
    namespaceSelector:
      matchLabels:
        tier: one
    items:
      - |
        apiVersion: v1
        kind: Pod
        metadata:
          name: nginx
          labels:
            app.kubernetes.io/name: proxy
        spec:
          containers:
            - name: nginx
              image: nginx:11.14.2
              ports:
                - containerPort: 80
                  name: http-web-svc
      - |
        apiVersion: v1
        kind: Service
        metadata:
          name: nginx-service
          labels:
            app.kubernetes.io/name: proxy
        spec:
          selector:
            app.kubernetes.io/name: proxy
          ports:
            - name: name-of-service-port
              protocol: TCP
              port: 80
              targetPort: http-web-svc
  owners:
    - name: alice
      kind: User
```

The difference with the proposed CRD is that items are strings of Kubernetes objects in YAML format. I ran into decoding issues when I tried to use `Unstructured`, so I decided to settle on strings instead.

How it works?
----

We search for namespaces specified by `namespaceSelector` that are owned by the tenant. For each matched namespace, we apply all resources specified in `additionalResources.items`; on any error, the next reconciliation is rescheduled after `resyncPeriod`.

What is missing?
----

- [ ] Tests
- [ ] What happens when a tenant is deleted?
- [ ] Does `additionalRoleBindings` cover for `clusterRoles` defined in approach 01?

I will wait for feedback/discussion on how to proceed from here.
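On the last open question: the `Tenant` resource already exposes an `additionalRoleBindings` field that distributes RoleBindings to every tenant Namespace. A minimal sketch of how it might cover the `clusterRoles` part of approach 01, assuming a `database-admin` ClusterRole already exists in the cluster (tenant and subject names are illustrative):

```yaml
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: solar            # illustrative tenant name
spec:
  owners:
    - name: alice
      kind: User
  additionalRoleBindings:
    # Binds the pre-existing database-admin ClusterRole to the developers
    # group inside every Namespace belonging to this tenant.
    - clusterRoleName: database-admin
      subjects:
        - kind: Group
          apiGroup: rbac.authorization.k8s.io
          name: developers
```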
@prometherion even though it is a good and smart design, I have some concerns about this approach. Who's in charge of creating the
@bsctl @prometherion Have you had further discussions about this feature?
@oliverbaehler my bad, missed this notification. Since #644, we're planning this for the v0.2.0 release, and landing in the
The idea is to support both Namespaced resources, so Tenant Owners, if enabled by RBAC, can create their own replication resources, and in the Tenant specification for
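Enabling this via RBAC could look roughly like the following Role and RoleBinding granted to a Tenant Owner inside one of their Namespaces; the object names and the target Namespace are illustrative, and the `tenantresources` plural assumes the CRD is registered under the `capsule.clastix.io` group:

```yaml
# Illustrative only: object names and the target Namespace are placeholders.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tenant-resource-editor
  namespace: oil-production
rules:
  # Allow managing the Namespaced replication resources proposed above.
  - apiGroups: ["capsule.clastix.io"]
    resources: ["tenantresources"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tenant-resource-editor
  namespace: oil-production
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: tenant-resource-editor
subjects:
  - kind: User
    apiGroup: rbac.authorization.k8s.io
    name: alice
```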
After a chat with the maintainer, we decided to go in a direction where two new CRDs are introduced: a cluster-scoped one named `GlobalTenantResource`, and a Namespace-scoped one named `TenantResource`.

```yaml
apiVersion: capsule.clastix.io/v1beta2
kind: GlobalTenantResource
metadata:
  name: green-production
spec:
  tenantSelector:
    matchLabels:
      energy: green
  resyncPeriod: 60s
  pruningOnDelete: true
  resources:
    - namespaceSelector:
        matchLabels:
          environment: production
      additionalMetadata:
        labels:
          labels.energy.io: green
        annotations:
          annotations.energy.io: green
      namespacedItems:
        - apiVersion: v1
          kind: Secret
          namespace: default
          selector:
            matchLabels:
              replicate: green
      rawItems:
        - apiVersion: v1
          kind: Secret
          metadata:
            name: raw-secret-1
        - apiVersion: v1
          kind: Secret
          metadata:
            name: raw-secret-2
        - apiVersion: v1
          kind: Secret
          metadata:
            name: raw-secret-3
```

```yaml
apiVersion: capsule.clastix.io/v1beta2
kind: TenantResource
metadata:
  name: wind-objects
spec:
  resyncPeriod: 60s
  pruningOnDelete: true
  resources:
    - namespaceSelector:
        matchLabels:
          environment: production
      additionalMetadata:
        labels:
          labels.energy.io: wind
        annotations:
          annotations.energy.io: wind
      namespacedItems:
        - apiVersion: v1
          kind: Secret
          namespace: wind-production
          selector:
            matchLabels:
              replicate: solar
      rawItems:
        - apiVersion: v1
          kind: Secret
          metadata:
            name: wind-secret-1
        - apiVersion: v1
          kind: Secret
          metadata:
            name: wind-secret-2
        - apiVersion: v1
          kind: Secret
          metadata:
            name: wind-secret-3
```

The last PR attached to this issue contains all the required information, along with the CRDs documentation.
Current design
Alice would like to create several resources on each of their Namespaces. Upon the creation of each Namespace, they have to create the desired resources on each Namespace manually; no automation can be put in place here.
Desired design
A new CRD could be crafted as follows.

The Namespace-scoped resource `TenantResource` would allow `alice` to create, on each Tenant Namespace, the resources declared in the `/spec/additionalResources` key.

These resources (like a DB CRD) would require RBAC granted to `developers`, using the `/spec/clusterRoles` specification key.

Replication of these resources could furthermore be fine-grained using a `namespaceSelector`, allowing to define on which Namespaces they should be replicated.

A reconciliation ensuring the current status matches the expected one is controlled by the `resyncPeriod`, a time-based value that allows polling the current state of the resources and enforcing the desired one.

This new Custom Resource Definition is going to be defined as Namespace-scoped in order to allow Tenant Owners to create their resources in an automated fashion.
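A sketch of what such a manifest could look like, based on the keys described above (it mirrors the "approach 01" example quoted in the PR discussion earlier in this thread); the `MySQL` item and the label values are illustrative only:

```yaml
apiVersion: capsule.clastix.io/v1beta2
kind: TenantResource
metadata:
  name: database
  namespace: solar-management
spec:
  # Poll the current state every 60 seconds and enforce the desired one.
  resyncPeriod: 60s
  additionalResources:
    # Replicate only into the Tenant Namespaces matching this selector.
    namespaceSelector:
      matchLabels:
        tier: one
    items:
      # Illustrative namespaced resource (e.g. a DB CRD) created in each matched Namespace.
      - apiVersion: presslabs.io/v1beta1
        kind: MySQL
        spec:
          foo: bar
  clusterRoles:
    # RBAC granted to the developers group in the selected Namespaces.
    namespaceSelector:
      matchLabels:
        tier: one
    items:
      - name: database-admin
        subjects:
          - kind: Group
            name: developers
```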
Proposal contributors
Looking for contributors for this feature!