Add template to test ipv6 and dual stack with k8s CI versions #4086
Conversation
Codecov Report: All modified lines are covered by tests ✅

@@ Coverage Diff @@
##             main    #4086   +/-   ##
=======================================
  Coverage   57.63%   57.63%
=======================================
  Files         188      188
  Lines       19202    19202
=======================================
  Hits        11067    11067
  Misses       7505     7505
  Partials      630      630

☔ View full report in Codecov by Sentry.
@nojnhuh the broken link error is coming from https://github.com/nojnhuh/cluster-api-provider-azure/tree/aso/azure/services/asogroups which is referred to in the ASO proposal. I'm guessing that branch no longer exists... Is there a good replacement for it?
/test pull-cluster-api-provider-azure-ipv6-conformance-with-ci-artifacts

@CecileRobertMichon: The specified target(s) for /test were not found.

The following commands are available to trigger optional jobs:

Use /test all to run the following jobs that were automatically triggered:

In response to this:

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/test pull-cluster-api-provider-azure-conformance-ipv6-with-ci-artifacts
force-pushed from a939e03 to e504625
/test pull-cluster-api-provider-azure-conformance-ipv6-with-ci-artifacts

/retest
force-pushed from 43beb70 to b55eee0
/test pull-cluster-api-provider-azure-conformance-ipv6-with-ci-artifacts
force-pushed from b55eee0 to 0151252
/test pull-cluster-api-provider-azure-conformance-ipv6-with-ci-artifacts

This is ready for review

/assign @nawazkh @jackfrancis
templates/test/ci/prow-ci-version-dual-stack/patches/machine-deployment.yaml (outdated review comment, resolved)
force-pushed from 0151252 to ef5bdf5
/test pull-cluster-api-provider-azure-conformance-ipv6-with-ci-artifacts

/hold cancel

rebase is done

/hold cancel
force-pushed from ef5bdf5 to 34feb23

force-pushed from 34feb23 to 51ce7cf
/test pull-cluster-api-provider-azure-conformance-ipv6-with-ci-artifacts

/retest

/lgtm

LGTM label has been added. Git tree hash: 128e0ce72dab03bcf7d89e8989d2d6382bbd8e5c

/lgtm

/assign @jackfrancis @mboersma
@@ -0,0 +1,12 @@
ginkgo.focus: \[Feature\:Networking-IPv6\]
ginkgo.skip: \[Feature\:SCTPConnectivity\]|\[Experimental\]
Why are we skipping [Experimental] in ipv6 but not dual-stack? Do [Experimental]-tagged test scenarios require ipv4, and are our k8s dual-stack clusters configured in such a way that those ipv4-dependent tests can be reliably tested?
Because there is no [Experimental] dual-stack test, but there was an ipv6 one that was failing: https://prow.k8s.io/view/gs/kubernetes-jenkins/pr-logs/pull/kubernetes-sigs_cluster-api-provider-azure/4086/pull-cluster-api-provider-azure-conformance-dual-stack-with-ci-artifacts/1711771121797304320

I could add it to both, but the dual-stack one would be a no-op; with ginkgo skip I prefer to skip tests only as needed.
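For context, the focus/skip pair in the diff above is just a pair of regular-expression filters over test names. A quick sketch of how the two regexes interact (the test names below are invented for illustration, not taken from the real conformance suite; the colon escapes are dropped since `:` is not a regex metacharacter):

```shell
# Regexes mirroring the template's ginkgo.focus / ginkgo.skip values.
FOCUS='\[Feature:Networking-IPv6\]'
SKIP='\[Feature:SCTPConnectivity\]|\[Experimental\]'

# Made-up test names, for illustration only.
tests='[sig-network] [Feature:Networking-IPv6] should provide IPv6 connectivity
[sig-network] [Feature:SCTPConnectivity] [Feature:Networking-IPv6] sctp over ipv6
[sig-network] [Feature:Networking-IPv6] [Experimental] some experimental case'

# Keep tests matching FOCUS, then drop any matching SKIP:
echo "$tests" | grep -E "$FOCUS" | grep -Ev "$SKIP"
# prints only the first line
```

Since the skip regex is an alternation, adding `\[Experimental\]` to the dual-stack template would indeed be a harmless no-op if no dual-stack test carries that tag.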
CI_URL="https://storage.googleapis.com/k8s-release-dev/ci/$${CI_VERSION}/bin/linux/amd64"
for CI_PACKAGE in "$${PACKAGES_TO_TEST[@]}"; do
  echo "* downloading binary: $$CI_URL/$$CI_PACKAGE"
  wget --inet4-only "$$CI_URL/$$CI_PACKAGE" -nv -O "$$CI_DIR/$$CI_PACKAGE"
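A side note on the `$$` doubling in this snippet: the template goes through an envsubst-style variable-substitution pass when it is rendered, and `$$` escapes to a literal `$` so the variable survives rendering and is expanded by the node's shell at runtime. The sketch below only mimics that escape step with `sed`; treating the escape as a plain `$$` to `$` collapse is an assumption about the renderer, not its real implementation:

```shell
# $${CI_URL} in the template becomes ${CI_URL} after rendering, so the
# node's shell (not the template renderer) expands it at boot time.
# The sed call stands in for the renderer's escape handling.
template='wget --inet4-only "$$CI_URL/$$CI_PACKAGE" -nv -O "$$CI_DIR/$$CI_PACKAGE"'
rendered=$(printf '%s' "$template" | sed 's/\$\$/$/g')
echo "$rendered"
# prints: wget --inet4-only "$CI_URL/$CI_PACKAGE" -nv -O "$CI_DIR/$CI_PACKAGE"
```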
I assume that this --inet4-only flag has nothing to do w/ ipv4 k8s clusters (sort of a funny coincidence that this change would land in this PR :)).
It does, this is to ensure we always connect to the ipv4 IP address of https://storage.googleapis.com/k8s-release-dev, as the IPv6 one was not reachable. It's a no-op for the ipv4 template since we always used the ipv4 address, but it ensures the ipv6/dual stack templates, which are running on dual stack hosts, use the ipv4 address.
ACK, it was a little strange at first to see this in the ipv6 template, but it makes sense that the actual OS is dual stack (the ipv6-only config is just the k8s surface area).
/lgtm
unless any of the questions I posed are actionable
@jackfrancis can you please approve if all your comments were addressed?

/approve
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: CecileRobertMichon

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment. Approvers can cancel approval by writing /approve cancel.
What type of PR is this?
What this PR does / why we need it: Adds templates for ipv6 and dual stack using release-branch CI k8s versions so we can test various k8s release branches with CAPZ before releases to catch future regressions like kubernetes/kubernetes#120999.
Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged): Fixes #
Special notes for your reviewer:
TODOs:
Release note: