Commit
Increase wait time for nodes going ready
Occasionally some nodes remain unready forever, presumably due to https://bugzilla.redhat.com/show_bug.cgi?id=1698253, which causes https://bugzilla.redhat.com/show_bug.cgi?id=1698624.

Orthogonally, some tests are timing out even though the node eventually goes ready, hence this PR increases the polling time.

See all failures: https://openshift-gce-devel.appspot.com/builds/origin-ci-test/pr-logs/pull/openshift_machine-api-operator/261/pull-ci-openshift-machine-api-operator-master-e2e-aws-operator/

E.g. https://openshift-gce-devel.appspot.com/build/origin-ci-test/pr-logs/pull/openshift_machine-api-operator/261/pull-ci-openshift-machine-api-operator-master-e2e-aws-operator/781/ where ip-10-0-133-147.ec2.internal makes "recover from deleted worker machines" fail:

E0412 08:06:16.949021 4971 framework.go:448] Node "ip-10-0-133-147.ec2.internal" is not ready
E0412 08:06:16.968104 4971 framework.go:448] Node "ip-10-0-133-147.ec2.internal" is not ready

while in the next test it eventually goes ready:

I0412 08:06:28.961206 4971 utils.go:233] Node "ip-10-0-133-147.ec2.internal". Ready: true. Unschedulable: false

We are only timing out recently because the time for a node to go ready has increased slightly, though it is still a reasonable amount of time. It is difficult to say why yet; it might be related to cri-o changes, to skew between the bootimage and the machine-os-content image and pivoting, to CI cloud rate limits, or to similar factors.
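The diff itself is not shown here, but the change amounts to raising the readiness timeout in the e2e framework. Below is a minimal sketch of such a poll loop, assuming a client-go clientset; the helper name waitForNodeReady and the interval/timeout values are illustrative, not the actual code in framework.go or utils.go referenced in the logs above.

// waitForNodeReady polls until the named node reports a Ready condition,
// using a caller-supplied timeout so slow-but-eventually-ready nodes do
// not fail the test run. Hypothetical helper, not the repository's code.
package e2e

import (
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitForNodeReady(client kubernetes.Interface, nodeName string, timeout time.Duration) error {
	return wait.Poll(5*time.Second, timeout, func() (bool, error) {
		node, err := client.CoreV1().Nodes().Get(nodeName, metav1.GetOptions{})
		if err != nil {
			// Treat transient API errors as "not ready yet" and keep polling.
			return false, nil
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				return true, nil
			}
		}
		return false, nil
	})
}

A caller would then pass a larger timeout (for example 15 minutes instead of 10) so that nodes which go ready late, as in the log excerpt above, are tolerated instead of failing the test.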