
🐛 Allow KCP to Update when CoreDNS version doesn't change #5986

Merged

Conversation

killianmuldoon
Contributor

Signed-off-by: killianmuldoon [email protected]

This change allows KCP to perform an upgrade when the CoreDNS version is not increased. This lets older branches stay on the same CoreDNS version when running upgrade e2e tests against later versions of Kubernetes.

Fixes #5952

@k8s-ci-robot k8s-ci-robot added the cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. label Jan 25, 2022
@killianmuldoon killianmuldoon changed the title 🐛 allow KCP to Update when CoreDNS version doesn't change 🐛 [WIP] allow KCP to Update when CoreDNS version doesn't change Jan 25, 2022
@k8s-ci-robot k8s-ci-robot added the size/S Denotes a PR that changes 10-29 lines, ignoring generated files. label Jan 25, 2022
@killianmuldoon
Contributor Author

@sbueringer I wasn't able to properly run the e2e tests this is supposed to fix locally - can we run them here?

@@ -500,7 +500,7 @@ func (in *KubeadmControlPlane) validateCoreDNSVersion(prev *KubeadmControlPlane)
)
return allErrs
}

// Note: This check allows an "upgrade" in the case where the versions are equal.
if err := migration.ValidUpMigration(fromVersion.String(), toVersion.String()); err != nil {
@sbueringer sbueringer Jan 25, 2022


Would be nice if you could also adjust the error message in l.509:

fmt.Sprintf("cannot migrate CoreDNS from '%v' to '%v': %v", fromVersion, toVersion, err),

Feels way easier to read like that to me :)


I think if we want to be able to support "no-ops" regarding the CoreDNS version, we can only use the migration tool's validation here when fromVersion and toVersion are not equal.


Agreed.
The behavior of ValidUpMigration is kind of subtle, because it handles fromVersion == toVersion but only if the version is known (I found this by digging into the code; also, the inner error in the failing jobs is "start version '1.8.6' not supported").

So, if I understand correctly, what we want to do here is support upgrading the Kubernetes version of Clusters created with an unsupported CoreDNS version, but without changing the CoreDNS version.

In order to achieve this, IMO the simplest option is to not call ValidUpMigration if fromVersion == toVersion; eventually, in a follow-up PR, we can work on a better error message that reports the maximum CoreDNS version a KCP release supports (by looking at migration.ValidVersions), but let's keep this off the table for now so we have a minimal fix that can be backported.
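The guard proposed above can be sketched as follows. This is a hypothetical standalone helper, not the actual KCP code: the real validator would delegate to migration.ValidUpMigration from the CoreDNS corefile-migration library, which is stubbed out here.

```go
package main

import "fmt"

// coreDNSUpgradeAllowed sketches the proposed guard: treat
// fromVersion == toVersion as a no-op that is always allowed, and only run
// the migration tool's validation when the versions actually differ.
func coreDNSUpgradeAllowed(from, to string, validUpMigration func(from, to string) error) error {
	if from == to {
		// No-op "upgrade": skip validation entirely, so clusters on an
		// unsupported CoreDNS version can still bump their Kubernetes version.
		return nil
	}
	return validUpMigration(from, to)
}

// stubValidator stands in for migration.ValidUpMigration; like the failing
// jobs, it rejects an unknown start version.
func stubValidator(from, to string) error {
	return fmt.Errorf("start version '%s' not supported", from)
}

func main() {
	fmt.Println(coreDNSUpgradeAllowed("1.8.6", "1.8.6", stubValidator)) // no-op: allowed
	fmt.Println(coreDNSUpgradeAllowed("1.8.6", "1.8.7", stubValidator)) // real change: validated
}
```

With this shape, an unsupported start version only fails validation when the user actually tries to change the CoreDNS version.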


Another thing we should consider is to prevent users from creating clusters with unsupported CoreDNS versions; this would make things clearer and enforce the upper limit of our version support matrix, but I think we should discuss this with the community first.

-	if x := from.Compare(to); x > 0 || (x == 0 && fromTag == toTag) {
-		return fmt.Errorf("toVersion %q must be greater than fromVersion %q", toTag, fromTag)
+	// make sure that the version we're upgrading to is greater than or equal to the current version.
+	if to.LT(from) {

TBH I'm not sure that we need this change, given that this func is not called if the versions are equal:

// Return early if the from/to image is the same.
if info.FromImage == info.ToImage {
return nil
}
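The relaxed comparison in the diff can be illustrated with a minimal sketch. The helpers below are hypothetical: the real code uses a semver library's LT/Compare methods rather than hand-rolled parsing, and, per the comment above, the equal-versions case is already short-circuited by the FromImage == ToImage early return before this check runs.

```go
package main

import "fmt"

// parseSemver is a minimal "MAJOR.MINOR.PATCH" parser for this sketch.
func parseSemver(v string) (parts [3]int) {
	fmt.Sscanf(v, "%d.%d.%d", &parts[0], &parts[1], &parts[2])
	return
}

// lessThan mimics semver's to.LT(from) on tag strings.
func lessThan(a, b string) bool {
	pa, pb := parseSemver(a), parseSemver(b)
	for i := 0; i < 3; i++ {
		if pa[i] != pb[i] {
			return pa[i] < pb[i]
		}
	}
	return false
}

// validateToVersion mirrors the relaxed check: only reject a downgrade,
// so equal versions pass.
func validateToVersion(fromTag, toTag string) error {
	if lessThan(toTag, fromTag) {
		return fmt.Errorf("toVersion %q must be greater than or equal to fromVersion %q", toTag, fromTag)
	}
	return nil
}

func main() {
	fmt.Println(validateToVersion("1.8.6", "1.8.6")) // equal: allowed
	fmt.Println(validateToVersion("1.8.6", "1.8.4")) // downgrade: rejected
}
```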

@k8s-ci-robot k8s-ci-robot added size/M Denotes a PR that changes 30-99 lines, ignoring generated files. and removed size/S Denotes a PR that changes 10-29 lines, ignoring generated files. labels Jan 26, 2022
@killianmuldoon killianmuldoon force-pushed the fix/kcp-coredns-check branch 3 times, most recently from a9eeefd to 7de85c6 Compare January 26, 2022 14:25
@killianmuldoon killianmuldoon changed the title 🐛 [WIP] allow KCP to Update when CoreDNS version doesn't change 🐛 Allow KCP to Update when CoreDNS version doesn't change Jan 26, 2022
@killianmuldoon
Contributor Author

I've tested this locally and it looks like it's solving the base issue in our e2e upgrades.

@sbueringer
Member

@killianmuldoon
WDYT about creating the cherry-pick PRs for v0.4 and v1.0 manually already? We can then confirm on those PRs via:

/test pull-cluster-api-e2e-workload-upgrade-1-23-latest-release-0-4
/test pull-cluster-api-e2e-workload-upgrade-1-23-latest-release-1-0

@k8s-ci-robot
Contributor

@sbueringer: The specified target(s) for /test were not found.
The following commands are available to trigger required jobs:

  • /test pull-cluster-api-build-main
  • /test pull-cluster-api-e2e-main
  • /test pull-cluster-api-test-main
  • /test pull-cluster-api-test-mink8s-main
  • /test pull-cluster-api-verify-main

The following commands are available to trigger optional jobs:

  • /test pull-cluster-api-apidiff-main
  • /test pull-cluster-api-e2e-full-main
  • /test pull-cluster-api-e2e-informing-ipv6-main
  • /test pull-cluster-api-e2e-informing-main
  • /test pull-cluster-api-e2e-workload-upgrade-1-23-latest-main
  • /test pull-cluster-api-make-main

Use /test all to run the following jobs that were automatically triggered:

  • pull-cluster-api-apidiff-main
  • pull-cluster-api-build-main
  • pull-cluster-api-e2e-informing-ipv6-main
  • pull-cluster-api-e2e-informing-main
  • pull-cluster-api-e2e-main
  • pull-cluster-api-test-main
  • pull-cluster-api-test-mink8s-main
  • pull-cluster-api-verify-main

In response to this:

@killianmuldoon
WDYT about creating the cherry-pick PRs for v0.4 and v1.0 manually already. We can then confirm on those PRs via:

/test pull-cluster-api-e2e-workload-upgrade-1-23-latest-release-0-4
/test pull-cluster-api-e2e-workload-upgrade-1-23-latest-release-1-0

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added size/S Denotes a PR that changes 10-29 lines, ignoring generated files. and removed size/M Denotes a PR that changes 30-99 lines, ignoring generated files. labels Jan 26, 2022
@sbueringer
Member

sbueringer commented Jan 26, 2022

/lgtm

To be absolutely safe we can wait for the upgrade test results on the cherry-pick PRs (before merging all of those)

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Jan 26, 2022
@sbueringer
Member

/cherry-pick release-1.1

Assuming that we want to include this fix in all our currently supported versions (we already have cherry-picks for v0.4 and v1.0).

@k8s-infra-cherrypick-robot

@sbueringer: once the present PR merges, I will cherry-pick it on top of release-1.1 in a new PR and assign it to you.

In response to this:

/cherry-pick release-1.1

Assuming that we want to include this fix in all our currently supported versions (we already have cherry-picks for v0.4 and v1.0).

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@sbueringer
Member

/test pull-cluster-api-e2e-workload-upgrade-1-23-latest-main

@sbueringer
Member

@fabriziopandini I checked the test results and logs on the v0.4 and v1.0 PRs; everything looks fine.

I think we should be able to merge this PR and the cherry-picks.

@fabriziopandini
Member

Great work @sbueringer and @killianmuldoon!
/lgtm
/approve

@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: fabriziopandini

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Jan 27, 2022
@k8s-ci-robot k8s-ci-robot merged commit 72ef359 into kubernetes-sigs:main Jan 27, 2022
@k8s-ci-robot k8s-ci-robot added this to the v1.2 milestone Jan 27, 2022
@k8s-infra-cherrypick-robot

@sbueringer: new pull request created: #6011

In response to this:

/cherry-pick release-1.1

Assuming that we want to include this fix in all our currently supported versions (we already have cherry-picks for v0.4 and v1.0).

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

Successfully merging this pull request may close these issues.

Failing tests for 0.4 and 1.0
5 participants