[Flaky Test] Kubernetes e2e suite: [sig-apps] Deployment iterative rollouts should eventually progress #100314
Comments
Looks like
/sig apps
It looks like the failures are coming from default
@atiratree mind checking this one out
@soltysh

ProgressDeadlineExceeded

Most of the tests fail because of this. Some pods from the active Deployment/ReplicaSet start successfully, but others fail to start, go into CrashLoopBackOff, and keep failing to start. The Deployment then exceeds its ProgressDeadlineSeconds of 30s and the Progressing condition fails; nevertheless, the status checking still continues with its own 5m timeout. I cannot find what causes the failed containers, and the number of ReadyReplicas varies between runs. I think this is due to the contended environment, since most of the test failures happen in tandem with tens of other tests failing as well. Also, I can't reproduce this locally.

We can bump ProgressDeadlineSeconds to 5m to at least give the Deployment a chance to recover before the status-check timeout runs out.

Example cherry-picked logs for the last container that triggers a ReplicaSet update (from https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-network-proxy-grpc/1399579263820107776):
build.log
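For illustration, a minimal sketch of the proposed mitigation, assuming client-go and purely illustrative values (the name webserver, 3 replicas, and the httpd e2e test image are placeholders, not the test's actual settings): create the test Deployment with progressDeadlineSeconds raised to match the 5m status-poll timeout instead of 30s. This is not the e2e framework's actual helper.

```go
// Package e2esketch is an illustrative sketch, not kubernetes/kubernetes test code.
package e2esketch

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func int32Ptr(i int32) *int32 { return &i }

// createDeploymentWithRelaxedDeadline creates a Deployment whose
// progressDeadlineSeconds (300s) is aligned with the test's 5-minute
// status-poll timeout, instead of the 30s the flaky test uses today.
// All names and the image are hypothetical placeholders.
func createDeploymentWithRelaxedDeadline(ctx context.Context, c kubernetes.Interface, ns string) (*appsv1.Deployment, error) {
	labels := map[string]string{"app": "iterative-rollout"}
	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "webserver"},
		Spec: appsv1.DeploymentSpec{
			Replicas:                int32Ptr(3),
			ProgressDeadlineSeconds: int32Ptr(300), // 5m, matching the status-check timeout
			Selector:                &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "httpd",
						Image: "registry.k8s.io/e2e-test-images/httpd:2.4.38-2", // illustrative image
					}},
				},
			},
		},
	}
	return c.AppsV1().Deployments(ns).Create(ctx, d, metav1.CreateOptions{})
}
```

With a deadline this long, a transient CrashLoopBackOff on a contended node would no longer flip the Progressing condition to ProgressDeadlineExceeded before the status poll has a chance to see the rollout recover.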
Status not updated

One test (https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-containerd/1399452929412304896) fails because of a different issue: the deployment status and LastUpdateTime get stuck and do not change for the whole 5 minutes. The last step is

Nevertheless, I couldn't reproduce this. I tried to simulate the same order of the previously random operations, as captured by the log, with a new rollout at the end. The ReplicaSet and Deployment controllers are woken up a few times and nothing interesting happens; no new replica appears.
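For context, the status check that times out here is essentially a poll against the Deployment's status until it converges with the latest spec. The helper below is a hedged sketch of that kind of check, not the e2e framework's implementation; it assumes client-go and a recent apimachinery (for wait.PollUntilContextTimeout), and waitForDeploymentComplete is a hypothetical name.

```go
// Package e2esketch is an illustrative sketch, not kubernetes/kubernetes test code.
package e2esketch

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForDeploymentComplete polls every 2s, for up to 5m, until the
// Deployment's status reflects the latest generation and all replicas
// are updated and available. If the status stops being updated (as in
// the linked failure), this poll simply runs out its 5m timeout.
func waitForDeploymentComplete(ctx context.Context, c kubernetes.Interface, ns, name string) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, 5*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			d, err := c.AppsV1().Deployments(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			// The controller has not observed the latest rollout yet.
			if d.Status.ObservedGeneration < d.Generation {
				return false, nil
			}
			want := int32(1)
			if d.Spec.Replicas != nil {
				want = *d.Spec.Replicas
			}
			// Complete when every replica is updated and available and
			// no surplus old replicas remain.
			return d.Status.UpdatedReplicas == want &&
				d.Status.AvailableReplicas == want &&
				d.Status.Replicas == want, nil
		})
}
```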
Other

I also found a bug: Pause is not tested properly (fixed by #102730).
/milestone v1.22
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten
Looks like @atiratree crushed this; testgrid looks fine and both #102736 and #102730 are merged.
@afirth: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Which jobs are flaking:
ci-kubernetes-kind-ipv6-e2e-parallel #1371633997716656128
ci-kubernetes-kind-e2e-parallel #1371833810513039360
Which test(s) are flaking:
Kubernetes e2e suite: [sig-apps] Deployment iterative rollouts should eventually progress
Testgrid link:
https://testgrid.k8s.io/sig-release-master-blocking#kind-master-parallel&exclude-non-failed-tests=
https://testgrid.k8s.io/sig-release-master-blocking#kind-ipv6-master-parallel&exclude-non-failed-tests=
Reason for failure:
Anything else we need to know:
https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-kind-e2e-parallel/1371833810513039360
https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-kind-ipv6-e2e-parallel/1371633997716656128
https://storage.googleapis.com/k8s-gubernator/triage/index.html?test=Deployment%20iterative%20rollouts%20should%20eventually%20progress