✨ automatically generate azure.json #802
Conversation
(force-pushed from 7f2f201 to d6f93e8)
/retitle [WIP] ✨ automatically generate azure.json
just marking this as WIP so it doesn't merge, as per the PR description
/test pull-cluster-api-provider-azure-e2e
I tried doing this a few different ways; I found it worked best to break it out into a separate controller. For MachineDeployment/MachineSet/KubeadmControlPlane, all machines come from AzureMachineTemplate clones, so we can share one secret for all machines in the group rather than wastefully creating one secret per machine. We also need to handle single machines that don't belong to any of those object types: we filter with a predicate and check that the individual machine isn't owned by any of the resource types listed above that can share a common azure.json. Finally, we add the associated object as an owner of the newly created secret so it gets garbage collected. Curious what people think!
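For readers following along, here is a minimal sketch of the two mechanisms described above (the ownership-based predicate and the owner reference used for garbage collection), assuming a recent controller-runtime. The helper names and the exact set of owner kinds are illustrative, not the PR's actual code:

```go
package controllers

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
	"sigs.k8s.io/controller-runtime/pkg/event"
	"sigs.k8s.io/controller-runtime/pkg/predicate"
)

// ownerKindsWithSharedJSON lists owner kinds whose machines can share one azure.json secret.
// The exact set is an assumption for this sketch.
var ownerKindsWithSharedJSON = map[string]bool{
	"MachineDeployment":   true,
	"MachineSet":          true,
	"KubeadmControlPlane": true,
}

// standaloneMachine returns true when the object has no owner that already gets a
// shared azure.json, i.e. a user-created standalone Machine.
func standaloneMachine(obj client.Object) bool {
	for _, ref := range obj.GetOwnerReferences() {
		if ownerKindsWithSharedJSON[ref.Kind] {
			return false
		}
	}
	return true
}

// filterStandaloneMachines only lets standalone Machines through to the reconciler.
func filterStandaloneMachines() predicate.Funcs {
	return predicate.Funcs{
		CreateFunc: func(e event.CreateEvent) bool { return standaloneMachine(e.Object) },
		UpdateFunc: func(e event.UpdateEvent) bool { return standaloneMachine(e.ObjectNew) },
	}
}

// ownSecret makes owner the controller owner of the generated secret so that
// deleting the owner garbage-collects the secret.
func ownSecret(owner client.Object, secret *corev1.Secret, scheme *runtime.Scheme) error {
	return controllerutil.SetControllerReference(owner, secret, scheme)
}
```

With a predicate like this wired into the controller, the reconciler only ever sees standalone machines, while grouped machines are handled once per owning object.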
(force-pushed from cbccbcd to 2954c26)
@CecileRobertMichon thanks 👍 although this is still WIP
controllers/helpers.go (Outdated)
secret.Data = map[string][]byte{
	"controlplane": controlPlaneConfig,
	"node":         nodeConfig,
}
What are your thoughts on potentially needing to split up the azure.json for nodes into one per machine deployment / machine pool in the future? I'm thinking of use cases where one of the configuration values would change per node pool, such as having multiple subnets.
That's what this does already; it just always generates both controlplane and node versions, even when the MachineDeployment can only be one or the other. We differentiate by referencing the desired key in the MD's kubeadm config.
I guess it's pretty easy to use the control plane label to tell them apart and only use one key.
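A sketch of what that could look like, assuming the generated secret's contents are chosen from the owning object's labels; the helper name, signature, and literal label string are assumptions rather than code from this PR:

```go
package controllers

import "sigs.k8s.io/controller-runtime/pkg/client"

// azureJSONData is a hypothetical helper that emits only the key the owner actually
// needs, using the Cluster API control plane label to tell control plane objects
// apart from worker objects.
func azureJSONData(owner client.Object, controlPlaneConfig, nodeConfig []byte) map[string][]byte {
	// "cluster.x-k8s.io/control-plane" is the CAPI control plane label; the exported
	// constant name for it differs between CAPI versions.
	if _, ok := owner.GetLabels()["cluster.x-k8s.io/control-plane"]; ok {
		return map[string][]byte{"controlplane": controlPlaneConfig}
	}
	return map[string][]byte{"node": nodeConfig}
}
```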
we don't actually plumb the per-machinepool subnet through right now because we get it from the cluster spec. But if we switched that to e.g. the machine spec, we could easily adjust the interfaces/params here to make it work.
sorry for stream of consciousness -- I think I remembered the reason I didn't differentiate.
For machines, you can tell whether they're control plane, machine set, or machine deployment based. If a machine is none of those (i.e., a user created a standalone machine for some reason), you want to catch that and create a single-machine azure.json.
For machine templates, the cluster is the owner. We would have to go back to the cluster, get the control plane ref, fetch that object, and check whether the control plane uses this machine template or not. But there's no strict guarantee that the infra ref is a KCP, so we can't use our knowledge of the KCP schema to look up the azure machine template spec.
The other alternative that would allow differentiation is creating an azure.json per control plane machine instead of per machine template for the control plane. That's strictly worse anyway, since we'd end up generating N times as many secrets as required (where N == control plane replicas), versus duplicating the data once inside the secret's data map.
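To make the trade-off concrete, here is a rough sketch of the cluster-walk described above, assuming cluster-api v1beta1 types; the KCP-specific field path is exactly the thing we can't rely on, which is the point:

```go
package controllers

import (
	"context"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// controlPlaneUsesTemplate reports whether the cluster's control plane object
// references the named infrastructure machine template. Hypothetical helper,
// not the PR's code.
func controlPlaneUsesTemplate(ctx context.Context, c client.Client, cluster *clusterv1.Cluster, templateName string) (bool, error) {
	ref := cluster.Spec.ControlPlaneRef
	if ref == nil {
		return false, nil
	}
	// Fetch the control plane object generically, since it may not be a KubeadmControlPlane.
	cp := &unstructured.Unstructured{}
	cp.SetGroupVersionKind(schema.FromAPIVersionAndKind(ref.APIVersion, ref.Kind))
	if err := c.Get(ctx, client.ObjectKey{Namespace: ref.Namespace, Name: ref.Name}, cp); err != nil {
		return false, err
	}
	// KCP-specific field path (spec.machineTemplate.infrastructureRef.name in recent APIs);
	// other control plane providers may not have this field at all.
	name, found, err := unstructured.NestedString(cp.Object, "spec", "machineTemplate", "infrastructureRef", "name")
	if err != nil || !found {
		return false, err
	}
	return name == templateName, nil
}
```

If the control plane is anything other than a KubeadmControlPlane, the nested field lookup simply won't find anything, so this check can't serve as a general way to tell control plane templates apart.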
(force-pushed from fd89f9f to 452f615)
Other than this lgtm (and we should verify that it doesn't break clusterctl move, haven't enabled that scenario in e2e yet)
haven't tested with clusterctl move, need to try that (first time!)
@alexeldeib from the sounds of #840, clusterctl is probably broken with capz right now 😐
lgtm
/assign @devigned @nader-ziada
I tried running a few scenarios and it seems to work for me.
/lgtm
/lgtm
/approve
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: CecileRobertMichon.
this is not the kind of pony energy I was going for ^
/honk
now that's what i'm talking about
/retest
@alexeldeib: The following test failed.
What this PR does / why we need it:
Automatically generates azure.json for machines/machine templates. Unfortunately, in the current form one secret will be generated for every machine template (or every machine not associated with a template). This means that if a user manually provides the secret to override it, we would create two secrets. However, with this implementation there are very few scenarios where a user would really need to override: the only one I really see is cloud provider rate limiting configuration, which we don't expose at all. (future work 🙂)
Added
Also, I haven't actually added any differentiation yet for the case where a user-provided secret doesn't match the expected one.
Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
related to #773 but doesn't close it, strictly speaking
Special notes for your reviewer:
ready for reviewz
added tests, think we just need docs
Probably need to add some unit tests and/or other validation, not 100% sure this does all the right things yet. Opening up for discussion since it's already quite big.
FYI
squash time
I sorted the commits nicely for your reviewing pleasure. I'll keep them sorted as such.
TODOs:
Release note: