Multi node config #147
Conversation
thank you for this work @fabriziopandini
added some comments, basically non-blocking, but i have added some ❓ marks here and there too.
i think this type of replication control would be pretty cool. while this structure provides a means to configure each node easily, i think we should also expose a way to have a common configuration, something like a ConfigMap. also, how about multi-cluster, was the config proposal for that discussed already?
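For illustration, a rough sketch of how a shared section could sit next to the per-node entries; the common field and its contents are purely hypothetical and not part of this PR:

    # hypothetical sketch: shared settings applied to every node unless overridden
    common:
      image: kindest/node:latest
    nodes:
    - role: control-plane
    - role: worker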
force-pushed from b1cdae3 to eb0eb45
@BenTheElder
IMO providing a shortcut for configuration changes is part of #133.
As far as I understand, multi-cluster is supported in
using the already existing decoupling, would it be possible to configure a multi-cluster topology where an external load balancer sits in front of both clusters? thanks for the updates.
In my hacky prototype I'm creating an HAProxy instance for each cluster, and I think I will start from this assumption when moving this feature upstream as well, because the implementation will be simpler without cross-cluster dependencies.
This looks really great @fabriziopandini, looking forward to taking it for a spin.
related: https://github.com/kubernetes-sigs/federation-v2 is doing some work with
@neolit123 if I'm not wrong, all the comments are addressed.
thanks for the updates @fabriziopandini
the more i think about it the more i prefer the node list under a single config parent object:

    kind: KindConfig
    apiVersion: kind.sigs.k8s.io/v1alpha2
    nodes:
    - role: control-plane
      replicas: 3
    - role: worker
      replicas: 2
    - role: external-load-balancer

i see this method as easier to "UX" and maintain. @fabriziopandini @BenTheElder please comment on the above proposal.
@neolit123 That said, I think that at this stage it is really important to get this out as soon as possible in order to start getting user feedback and unblock the remaining multi-node activities (some new requirements for config can be identified along the way as well).
I finally got some time to think about this some more and I think the nodes should be a list in a top-level object, like @neolit123 pointed out.
Apologies for the delay on this PR, KubeCon and then post-con "plague" (a cold?) built up a pretty large backlog of reviews. Will be getting to these more regularly now.
@BenTheElder I hope you are well now! I'm going to address the comments because I think it is important to get a first release merged ASAP and keep the following parts of the multi-node effort moving, but IMO this leads to a more fragile design. Nevertheless, this is not blocking because we can always iterate in the future. These are the points that IMO will be candidates for future iterations:
Feeling a bit better, thanks :-) Note: I do not want to rush multi-node too much, we are already in need of removing some technical debt from kind. Previous iterations had multi-node and were not right. These are interesting points but I'm not convinced.
The list would become a subset. The reason for doing this is to always give the rest of the code a "Config" object to work with, which should be as much as possible simply something read from disk and deserialized with minimal mutation (just defaulting).
The Node fields take precedence over the config-wide ones if any of them somehow clash. They generally shouldn't, though. The same is said in k8s of, say, resource requests vs inline requests.
It only assumes there is a list of "Node" type - why wouldn't there be? Nodes can have different roles.
I don't think the cluster API in its current state is desirable to emulate for local clusters. Kubeadm only has separate objects due to the phases. We're not exposing that. There's also no real reason these can't be combined into a single object; I don't think we want the burden of nodes with different GVK. In general, elevating individual node specs to have GVK seems very problematic.
Having nodes as a field also means we can default to adding a single node spec when no nodes are specified.
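To make that concrete, a sketch of the intended defaulting behavior, reusing the field names from @neolit123's example above (the exact kind name and defaults are still under discussion, so treat this as an assumption): a config that lists no nodes, e.g.

    kind: KindConfig
    apiVersion: kind.sigs.k8s.io/v1alpha2

could be defaulted to the equivalent of a single control-plane node:

    kind: KindConfig
    apiVersion: kind.sigs.k8s.io/v1alpha2
    nodes:
    - role: control-plane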
@BenTheElder I would like to discuss with you how to best address the following point:

    always give the rest of the code a "Config" object to work with, which should be as much as possible simply something read from disk and deserialized with minimal mutation (just defaulting).

Currently we are using a replica field at node level, e.g. from @neolit123's example:

    kind: KindConfig
    apiVersion: kind.sigs.k8s.io/v1alpha2
    nodes:
    - role: control-plane
      replicas: 3
    - role: worker
      replicas: 2
    - role: external-load-balancer

However, this is something the rest of the code should not work with, because the rest of the code needs the expected "config nodes" derived from the replicas number; additionally, this "derive" process should also take care of assigning unique names to nodes. e.g. the list of "config nodes" for the above example should be:

    derivedNodes:
    - role: control-plane
      name: control-plane1
    - role: control-plane
      name: control-plane2
    - role: control-plane
      name: control-plane3
    - role: worker
      name: worker1
    - role: worker
      name: worker2
    - role: external-load-balancer
      name: lb

Now that we are exposing the Config object as a public object type, it becomes harder to keep the derived part "hidden" from the api-machinery that handles type conversion and defaulting, so I think it will be useful to do a quick pair review before investing too much time into this refactoring.
I don't think we need to derive them on the object at all. Config is config, and at runtime we can have a completely distinct structure with the node handles we've produced. When creating any node we can note the replication number in the config.
i think we should have a discussion about this eventually.
force-pushed from 5553014 to c8ff127
@BenTheElder @neolit123
thanks for the update @fabriziopandini
had a pass at the diff today and LGTM.
i think the new structure is better and more intuitive. the boilerplate presented by the GVK metadata on the user side for nodes was less optimal.
@BenTheElder i think we should move forward with this change and iterate on v3 soon if needed.
as a side comment i think the config should not be in this namespace: kind/pkg/cluster/config/, but rather pkg/config, which follows the pattern we have for k/staging: https://github.com/kubernetes/kubernetes/tree/master/staging/src/k8s.io
kubeadm on the other hand uses apis/kubeadm which is also non-standard.
Plan is to iterate on this; @fabriziopandini's code is quite excellent and will serve as a great MVP for multi-node, thank you so much for working on this. As discussed, we're opting to go ahead and merge this now and handle bikeshedding via follow-up PRs instead of infinite back and forth on the existing PRs 😉 Sorry for the delay, I was maybe going to cut another smaller release before putting these in, but we're just going to forge ahead now.
/hold cancel
/lgtm
/approve
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: BenTheElder, fabriziopandini. The full list of commands accepted by this bot can be found here. The pull request process is described here.
fixed kubeconfig issue
This PR completes the implementation of support for multi-node configurations in kind, leveraging the changes introduced by #137 and #143.

fixes: #130

Note for reviewers
While implementing I tried to keep the kind Config surface as minimal as possible, and now the v1alpha2 version of the kind configuration has only one object, called Node, that can be repeated many times in a yaml document and/or set to automatically generate replicas (an illustrative sketch follows below).
Nodes are then grouped in a Config object, but this object exists only in the internal version of the kind configuration, so the user should not care about it.
Automatic conversion from v1alpha1 configurations is supported.
Finally, please note that while the v1alpha2 version of the kind configuration supports multi-node, the rest of kind does not yet, so the user will be temporarily blocked when trying to create clusters with n>1 nodes.
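For illustration, a minimal sketch of what a multi-node config of this shape could look like, based on the description above; the exact field names (role, replicas) are assumptions and may differ from the final API:

    # illustrative sketch: each YAML document describes one Node
    kind: Node
    apiVersion: kind.sigs.k8s.io/v1alpha2
    role: control-plane
    ---
    kind: Node
    apiVersion: kind.sigs.k8s.io/v1alpha2
    role: worker
    replicas: 2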