Upgrading a workload cluster using ClusterClass with RuntimeSDK test is flaky with error: Resource versions didn't stay stable #10838
Comments
@Sunnatillo the link is pointing to a different failure.
I updated it with the correct link.
/help
@fabriziopandini: Guidelines: Please ensure that the issue body includes answers to the required questions. For more details on the requirements of such an issue, please see here and ensure that they are met. If this request no longer meets these requirements, the label can be removed.
In response to this: /help
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/assign
I noticed that
Fine to add; I don't think it will affect the results, though.
Query to find the latest failures
Improvement to make CAPD DockerMachinePools more deterministic: #10998 (I wouldn't expect it to solve the whole flake, though)
The CAPD flake seems to be gone now. We only have a relatively rare flake with KCP left: https://storage.googleapis.com/k8s-triage/index.html?text=Detected%20objects%20with%20changed%20resourceVersion&job=.*cluster-api.*e2e.*main&xjob=.*-provider-.*
Will unassign myself for now, but if this flake persists, I can take another look when I have time.
/unassign
The MachinePool flake (#11162) is a lot more frequent/problematic.
I'll investigate it.
This issue is labeled with triage/accepted but has not been updated recently and should be re-triaged. You can confirm that it is still relevant with /triage accepted, or close it with /close.
For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/
/remove-triage accepted
/triage accepted
Just a note that the last occurrence of this seems to have been on 11/14/2024: https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/periodic-cluster-api-e2e-latestk8s-main/1857118214752833536
Currently, I don't have time to work on this.
I think this still happens (although the message changed):
Which jobs are flaking?
capi-e2e-main
Which tests are flaking?
When upgrading a workload cluster using ClusterClass with RuntimeSDK [ClusterClass] [It] Should create, upgrade and delete a workload cluster
/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade_runtimesdk.go:155
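For context, the failing assertion checks that, once the upgrade has finished and the cluster should be stable, objects stop being modified; if an object's resourceVersion keeps changing between polls, some controller is still updating it in a loop. Below is a minimal sketch of that idea using controller-runtime. It is not the actual helper from cluster_upgrade_runtimesdk.go; the object type (ConfigMap), namespace, and settle period are placeholders.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/client/config"
)

// snapshot records the resourceVersion of every object in the list,
// keyed by namespace/name.
func snapshot(ctx context.Context, c client.Client) (map[string]string, error) {
	versions := map[string]string{}
	list := &corev1.ConfigMapList{} // stand-in; the real check walks Cluster API objects
	if err := c.List(ctx, list, client.InNamespace("default")); err != nil {
		return nil, err
	}
	for _, obj := range list.Items {
		versions[obj.Namespace+"/"+obj.Name] = obj.ResourceVersion
	}
	return versions, nil
}

func main() {
	ctx := context.Background()

	cfg, err := config.GetConfig()
	if err != nil {
		panic(err)
	}
	c, err := client.New(cfg, client.Options{})
	if err != nil {
		panic(err)
	}

	// Take one snapshot, wait for a settle period, then take another.
	before, err := snapshot(ctx, c)
	if err != nil {
		panic(err)
	}
	time.Sleep(30 * time.Second) // placeholder; the real test polls repeatedly

	after, err := snapshot(ctx, c)
	if err != nil {
		panic(err)
	}

	// Any object whose resourceVersion changed between snapshots is still
	// being written to, which is the condition the e2e test reports as
	// "Resource versions didn't stay stable".
	for key, rv := range before {
		if afterRV, ok := after[key]; ok && afterRV != rv {
			fmt.Printf("object %s changed resourceVersion: %s -> %s\n", key, rv, afterRV)
		}
	}
}
```

A check like this is inherently sensitive to controllers that legitimately keep writing status while the cluster settles, which is why flakes of this kind often come down to making reconciliation (e.g. in CAPD DockerMachinePools or KCP) more deterministic.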
Testgrid link
Edited: https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/periodic-cluster-api-e2e-mink8s-release-1-7/1809819550426861568
Reason for failure (if possible)
No response
Anything else we need to know?
No response
Label(s) to be applied
/kind flake
/area ci