CABPK creates multiple machines with kubeadm init #3072
Comments
/area bootstrap
/remove-kind bug
@kanwar-saad I have tried to reproduce this locally, but it is working just fine for me:
- Only one master gets provisioned at first.
- After the first master is up, the 2nd and 3rd start provisioning.
- At the end I get 3 masters up and running (NotReady is only because I have not installed a CNI).
The only thing I can notice in your logs makes me think that somehow you are not starting from a clean state. Could you run a test ensuring all the secrets are removed first?
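For reference, a minimal sketch of how such a check could look, assuming the secret-name suffixes cluster-api/CABPK commonly uses (`-ca`, `-etcd`, `-sa`, `-proxy`, `-kubeconfig`) and hypothetical cluster/namespace names; verify the names against your own environment before deleting anything:

```sh
# Hypothetical names: adjust CLUSTER and NAMESPACE to your setup.
CLUSTER=my-cluster
NAMESPACE=default

# List any secrets left over from a previous run of this cluster.
kubectl get secrets -n "$NAMESPACE" | grep "^${CLUSTER}-"

# If stale certificate secrets exist, delete them so fresh certificates
# are generated on the next run. The suffixes below are assumptions based
# on the usual cluster-api naming; confirm before deleting.
for suffix in ca etcd sa proxy kubeconfig; do
  kubectl delete secret -n "$NAMESPACE" "${CLUSTER}-${suffix}" --ignore-not-found
done
```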
/milestone Next
@fabriziopandini true, it refers to a cluster without KCP in this case.
Apologies, I missed the
Issues go stale after 90d of inactivity. If this issue is safe to close now, please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fabriziopandini: Closing this issue.
Original issue description:
What steps did you take and what happened:
I am creating control plane machines directly, without the KubeadmControlPlane controller. If I create multiple control plane machines back to back, CABPK starts provisioning all of them at once, and every machine gets kubeadm init in its userdata instead of only the first master.
When creating the KubeadmConfig objects, I set the kubeadm init configuration only for the first master and the join configuration fields for the other two masters, roughly as in the sketch below.
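To make that split concrete, here is a minimal sketch of the two KubeadmConfig shapes; the object names are hypothetical and the v1alpha3 API version is an assumption based on the cluster-api release current at the time of this issue:

```yaml
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
kind: KubeadmConfig
metadata:
  name: master-0-config   # hypothetical name
spec:
  # Only the first master carries init/cluster configuration,
  # so only its userdata should contain `kubeadm init`.
  clusterConfiguration: {}
  initConfiguration: {}
---
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
kind: KubeadmConfig
metadata:
  name: master-1-config   # hypothetical name
spec:
  # The other masters carry only a join configuration; `controlPlane: {}`
  # makes them join as control plane nodes via `kubeadm join`.
  joinConfiguration:
    controlPlane: {}
```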
What did you expect to happen:
Only the first control plane machine should get kubeadm init in its userdata; the remaining machines should wait for the first one to come up and get kubeadm join instead.
Anything else you would like to add:
I tried adding a 4-second delay between the creation of each machine, but the result is the same.
Environment:
- Kubernetes version (use kubectl version): v1.17.3
- OS (e.g. from /etc/os-release): SLES
/kind bug
Attached log: capi_ha.log
/area bootstrap