Add optional v1alpha4 and self-hosted management cluster tests #2833
/test ?
@randomvariable: The following commands are available to trigger required jobs:
The following commands are available to trigger optional jobs:
Use
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/test pull-cluster-api-provider-aws-e2e
/milestone v1.0.0
/test pull-cluster-api-provider-aws-e2e
/test pull-cluster-api-provider-aws-e2e
Force-pushed from 3cc609c to fc08e02
/test pull-cluster-api-provider-aws-e2e
/test pull-cluster-api-provider-aws-e2e
Force-pushed from fc08e02 to 795baa8
I am able to replicate locally. After the upgrade, all deployment machines come up but
Is that a separate issue from the one that keeps showing up for all the runs using a remote management cluster?
They are the same; that deadline is exceeded because the CAPI controllers don't reconcile anything. The CAPI logs don't show any errors, but they also don't reconcile when I make changes to a MachineDeployment. I will check with Stefan/Fabrizio when they are online. [UPDATE]: The failures I saw were probably due to using an unsupported version of Kubernetes.
If you trace the lines back, it's waiting for the deployment of the CAPA controllers to succeed, i.e. for the appsv1.Deployment of the controller to come up. Following the trace gets to https://github.com/kubernetes-sigs/cluster-api/blob/main/test/framework/clusterctl/clusterctl_helpers.go#L153-L166
Just ran in us-west-2, and it's working fine locally. :(
One thing to try might be to see if the e2e image upload is actually working. You'll need to look at the logs mid-run, but there will be a log line along the lines of:
At that point, run
Note that you'll need to change s3:/// to s3:// due to a typo in my log line.
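A small sketch of that check, correcting the s3:/// typo before handing the URL to the AWS CLI. The bucket and key here are hypothetical placeholders, not the actual log output:

```shell
# Hypothetical URL as it would appear in the (typo'd) log line.
LOG_URL="s3:///my-e2e-bucket/images/capa-manager.tar"

# Fix the extra slash introduced by the typo: s3:/// -> s3://
FIXED_URL=$(echo "$LOG_URL" | sed 's|^s3:///|s3://|')
echo "$FIXED_URL"

# With AWS credentials configured, verify the object actually exists:
#   aws s3 ls "$FIXED_URL"
```

If `aws s3 ls` returns nothing, the upload never happened (or the object was deleted), which would explain the deployment never coming up.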
Thanks, will check that out. /test pull-cluster-api-provider-aws-e2e
Yeah, the bucket is empty during the test run. I think something is deleting the S3 image. Maybe it's a Lambda or some security mechanism deleting any object made publicly readable. We'll probably need to create an S3 bucket policy that allows members of the AWS account to read, without setting objects publicly readable. I might have to check with the k8s infra / SIG Testing folks whether this is what is happening.
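A sketch of the kind of bucket policy being suggested, granting read access to principals in the AWS account itself so the objects no longer need a public-read ACL. The account ID and bucket name are made-up placeholders:

```shell
# Hypothetical account ID (123456789012) and bucket (my-e2e-bucket).
# Grants s3:GetObject to principals in the owning AWS account only.
cat > /tmp/e2e-bucket-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAccountRead",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::123456789012:root" },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-e2e-bucket/*"
    }
  ]
}
EOF

# Apply with credentials configured:
#   aws s3api put-bucket-policy --bucket my-e2e-bucket \
#     --policy file:///tmp/e2e-bucket-policy.json
python3 -m json.tool /tmp/e2e-bucket-policy.json > /dev/null && echo "policy JSON valid"
```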
It is now mid-run and it looks like the image is missing:
Based on local testing, I'm confident the upgrades are working. I'd suggest reverting the timeout changes and the skip, changing Describe to PDescribe for the v1alpha3 upgrade test (rather than commenting it out), then merging and doing a release. Thoughts, @richardcase? PS: the EKS upgrade test currently isn't possible because of the #2828 dependencies.
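For context on that suggestion: Ginkgo treats a PDescribe container as pending, so the spec stays in the source and is reported as skipped rather than deleted or commented out. A sketch of the flip on a hypothetical one-line stand-in (the real file and spec names in the repo will differ):

```shell
# Hypothetical stand-in for the v1alpha3 upgrade spec file.
cat > /tmp/upgrade_spec.go <<'EOF'
var _ = Describe("v1alpha3 upgrade", func() {})
EOF

# Mark the container pending: Ginkgo will report it as skipped
# instead of running it, unlike commenting the whole block out.
sed -i 's/= Describe("v1alpha3 upgrade"/= PDescribe("v1alpha3 upgrade"/' /tmp/upgrade_spec.go
cat /tmp/upgrade_spec.go
```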
Force-pushed from 8131c39 to cc00bcc
/test pull-cluster-api-provider-aws-e2e
Yay @sedefsavas the e2e passed. |
/lgtm |
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: sedefsavas. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
Sounds good to me, especially as the e2e is passing. The timeout changes are still in place, but all good: /lgtm
@richardcase These tests were already passing. The issue is with the upgrade test, which I disabled in this PR.
/hold cancel
Signed-off-by: Naadir Jeewa [email protected]
What type of PR is this?
/kind failing-test
/area testing
What this PR does / why we need it:
Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Part of #2788
Part of #2793
Special notes for your reviewer:
Checklist:
Release note: