What is the proper scoping of our API types? #463
Comments
Option 3 with the kube-system namespace is my personal preference.
I like option 3 with the default namespace, so that a query that doesn't include a namespace gives you information about the current cluster.
Option 4 (Namespaces, with cluster field)
Pros
Cons
+1 for option 4, possibly with validation to ensure that one and only one cluster with local=true goes into "default" (and nowhere else). Adjusting any local behavior based on a cluster boolean does feel like a slight improvement vs. comparing the namespace to a constant.
Those are actually good suggestions. How about both? Locally managed clusters have their types stored in the default namespace, and they have a "LocallyManaged" status field that is set to true by the deployment tool. Remote clusters live in other namespaces and have a "LocallyManaged" status field set to false. Seem good?
@krousey I'm not sure that having the field in the status buys us anything. We would still be dependent upon the user putting the object in the correct namespace for their desired behavior, without any field that the user has to set to declare their intention. A field in the status would be nothing more than an after-the-fact indicator to the user about what happened.
@krousey I would avoid setting any fields for that option. A more suitable approach is to use an annotation. I would choose option 2, because it's the most flexible one: you can name your namespaces however you want and put your machines in them, and there are no magic names or requirements. It's boring and simple.
@staebler I envisioned that the deployment tool would know whether it's deploying for remote or local management, and it could set the status appropriately. But then again, the whole point of this is to put all inputs in the spec. I want to punt on local versus remote for now; it can be done with annotations at first if needed. So we will go with option 2: namespaces.
Closing this issue. The new types were created with namespace scoping. |
This was brought up in the cluster API review, and I think it was discussed in last week's meeting. I'm creating this issue so that we have a concentrated and recorded place to discuss this.
Do we want to have machines and clusters namespaced or not?
Currently we have them cluster-scoped because the prototype was designed for the types to be stored in (or aggregated with) the cluster they represented. The other use case is to store the types in another cluster for remote management. If we leave the types cluster-scoped, we can only manage one cluster remotely. This is not a desirable limitation.
So the first major question: do we want to support both local and remote management? I think the answer is obviously yes, but if not, we should go with the scoping that best solves the most desired use case.
If we want to support both, I think we have a few options. Feel free to propose others that are drastically different from what I present below.
Option 1 (Do it both ways!)
We support both namespace-scoped and cluster-scoped types in the API aggregation server. I imagine we could implement it in such a way that the aggregated API server registers the types as namespaced or not based on a flag.
Pros
Cons
Option 2 (Just namespaces)
We put every cluster in a namespace, including local ones. Each namespace would have its own Cluster object and several Machine objects.
Pros
Cons
Option 3 (Just namespaces, with special ones)
Just like option 2, we make all the types namespace-scoped. The only difference is that we reserve (by convention, or by somehow codifying it) a special namespace for local cluster management. I propose one of the following:
kube-system
default, so that `kubectl get <any cluster type>` without a namespace would return the objects representing the local cluster by default.
Pros
Cons