Describe the bug
Ansible automation fails during HA cluster deployment.
To Reproduce
2020-03-28T22:24:44.9969616Z 22:24:44 INFO cli.engine.ansible.AnsibleCommand - TASK [kubernetes_master : Check if old flannel daemonset exists] ***************
2020-03-28T22:24:50.2462992Z 22:24:50 INFO cli.engine.ansible.AnsibleCommand - fatal: [ci-haazurrhelcanal-kubernetes-master-vm-0]: FAILED! => {"changed": false, "cmd": "kubectl get ds kube-flannel-ds --namespace kube-system\n", "delta": "0:00:03.798104", "end": "2020-03-28 22:24:49.981120", "failed_when_result": true, "msg": "non-zero return code", "rc": 1, "start": "2020-03-28 22:24:46.183016", "stderr": "Error from server: etcdserver: leader changed", "stderr_lines": ["Error from server: etcdserver: leader changed"], "stdout": "", "stdout_lines": []}
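The failing task shells out to kubectl while etcd is in the middle of a leader election, so the command returns a transient error rather than a real failure. One common mitigation (a sketch, not part of this report; the task name and command are taken from the log above, the register variable and retry counts are hypothetical) is to retry the check until etcd settles:

```yaml
# Hypothetical rework of the failing task: retry the kubectl check so a
# transient "etcdserver: leader changed" error does not fail the play.
- name: Check if old flannel daemonset exists
  command: kubectl get ds kube-flannel-ds --namespace kube-system
  register: flannel_ds
  retries: 5    # retry while etcd finishes its leader election
  delay: 10     # seconds between attempts
  # rc 0 means the daemonset exists; rc 1 with "not found" on stderr means it
  # does not; anything else (e.g. a transient etcd error) triggers a retry.
  until: flannel_ds.rc == 0 or 'not found' in flannel_ds.stderr
  failed_when: flannel_ds.rc != 0 and 'not found' not in flannel_ds.stderr
```

With `until`/`retries`, only an error that persists across all attempts fails the play, which matches the intermittent nature described below.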
Expected behavior
Multi-master Kubernetes cluster should deploy without issues.
Config files
Any standard HA config can be used to attempt reproduction; the failure is intermittent.
OS (please complete the following information):
Ubuntu, RHEL, CentOS(?)
Cloud Environment (please complete the following information):
Azure
Additional context
Happens randomly; it's difficult to reproduce.
@przemyslavic can you please review whether this bug is still valid? If so, can you provide steps to reproduce?
I am not able to reproduce this with the latest epicli version. If the error occurs again, a new issue will be created.