
Resolve release series labels in e2e config #5008

Closed
sbueringer opened this issue Jul 23, 2021 · 43 comments · Fixed by #9265
Labels

  • area/testing: Issues or PRs related to testing
  • kind/feature: Categorizes issue or PR as related to a new feature.
  • lifecycle/frozen: Indicates that an issue or PR should not be auto-closed due to staleness.
  • priority/important-longterm: Important over the long term, but may not be staffed and/or may need multiple releases to complete.
  • priority/important-soon: Must be staffed and worked on either currently, or very soon, ideally in time for the next release.
  • triage/accepted: Indicates an issue or PR is ready to be actively worked on.

Comments

@sbueringer (Member) commented Jul 23, 2021

User Story

As a developer I would like to have an easily consumable way to find out the latest stable release of a CAPI release series (e.g. v0.3.x or v0.4.x).

Detailed Description
Kubernetes currently exposes the latest stable releases (overall or per release series) via version marker files, e.g. https://dl.k8s.io/release/stable.txt and https://dl.k8s.io/release/stable-<X.Y>.txt.

This is useful when consuming Kubernetes, for example in CI. We currently have the same use case: we want to reference the latest clusterctl binary of the v0.3.x release series in our clusterctl upgrade e2e test (xref: #4995 (comment)).
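For illustration, here is a minimal Go sketch that reads one of these markers; the series number in the URL is just an example, and error handling is kept trivial:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

// fetchMarker reads a Kubernetes version marker file, e.g.
// https://dl.k8s.io/release/stable.txt or https://dl.k8s.io/release/stable-1.28.txt,
// and returns the version string it contains (e.g. "v1.28.4").
func fetchMarker(url string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(body)), nil
}

func main() {
	v, err := fetchMarker("https://dl.k8s.io/release/stable-1.28.txt")
	if err != nil {
		panic(err)
	}
	fmt.Println(v) // latest patch release of the 1.28 series
}
```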


/kind feature

@k8s-ci-robot k8s-ci-robot added the kind/feature Categorizes issue or PR as related to a new feature. label Jul 23, 2021
@fabriziopandini (Member)

I like the idea.
At least in a first iteration, marker files can be stored in the same buckets we are using for nightly build artifacts.

/area testing
(even if this could have applications beyond testing)

@k8s-ci-robot k8s-ci-robot added the area/testing Issues or PRs related to testing label Jul 23, 2021
@vincepri (Member)

Is there any automation we could reuse?

@vincepri (Member)

/priority important-longterm

@k8s-ci-robot k8s-ci-robot added the priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. label Jul 23, 2021
@sbueringer (Member, Author) commented Jul 23, 2021

I have zero idea how they do it or where to start looking. If I had to guess, those files are updated when publishing the actual k/k release (but I don't know anything about all this stuff). @dims?

@dims (Member) commented Jul 23, 2021

@sbueringer I can dig things up, but it's probably easier for @palnabarun, as he knows a lot more about the release stuff.

@vincepri (Member)

/milestone Next

@k8s-ci-robot k8s-ci-robot added this to the Next milestone Jul 28, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 26, 2021
@fabriziopandini (Member)

/lifecycle frozen

@k8s-ci-robot k8s-ci-robot added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Oct 27, 2021
@randomvariable (Member)

/assign @sonasingh46

@k8s-ci-robot (Contributor)

@randomvariable: GitHub didn't allow me to assign the following users: sonasingh46.

Note that only kubernetes-sigs members, repo collaborators and people who have commented on this issue/PR can be assigned. Additionally, issues/PRs can only have 10 assignees at the same time.
For more information please see the contributor guide

In response to this:

/assign @sonasingh46

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@fabriziopandini (Member)

@sonasingh46 if you want, you can assign yourself to the issue (this should also work if you are not an org member yet)

@sonasingh46

/assign sonasingh46

@killianmuldoon (Contributor)

@sonasingh46 Have you had time to work on this?

@sonasingh46

Hey @killianmuldoon -- I did not get a chance to start on this. I can start looking into this one next week.

@killianmuldoon (Contributor)

If you're still willing to pick it up, that would be awesome 😄. We've got a lot of point-release updates to do today, and something like this would really help automate the process!

@palnabarun (Member)

Sorry I missed the ping here!

We update the version markers during every release cut. krel (the Kubernetes release tooling) has a step in the process to update the marker file inside a publicly viewable GCS bucket. dl.k8s.io is just a redirect to that GCS bucket URL.

You can have a look at the code here:
https://github.com/kubernetes/release/blob/f55d5af19f1fab3e8cf6832abf331f95452a342d/pkg/release/publish.go#L155

@palnabarun (Member) commented Feb 3, 2022

My suggestions for a detailed plan here would be:

  1. Request a GCS bucket from sig-k8s-infra.
  2. Write some code using release.Publisher(...) to publish the marker to the GCS bucket.
  3. Run a Prow Job for every pushed tag / integrate code in (2) with the existing mechanism of releasing CAPI.

Let me know in case you need any help! Happy to review stuff.
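As a rough sketch of step (2), this is what publishing a stable-X.Y marker to a GCS bucket could look like. It uses the plain cloud.google.com/go/storage client rather than release.Publisher (whose exact API I haven't checked), and the bucket/object names are placeholders, not an agreed convention:

```go
package main

import (
	"context"
	"fmt"

	"cloud.google.com/go/storage"
)

// publishMarker writes a version string (e.g. "v0.3.25") into an object such as
// "release/stable-v0.3.txt" inside the given GCS bucket, mirroring the
// Kubernetes dl.k8s.io marker layout.
func publishMarker(ctx context.Context, bucket, object, version string) error {
	client, err := storage.NewClient(ctx)
	if err != nil {
		return fmt.Errorf("creating storage client: %w", err)
	}
	defer client.Close()

	w := client.Bucket(bucket).Object(object).NewWriter(ctx)
	w.ContentType = "text/plain"
	if _, err := w.Write([]byte(version + "\n")); err != nil {
		return fmt.Errorf("writing marker: %w", err)
	}
	// Close flushes the write; the object is not committed until it returns.
	return w.Close()
}

func main() {
	// Placeholder bucket/object names; running this requires GCP credentials.
	ctx := context.Background()
	if err := publishMarker(ctx, "capi-release-markers", "release/stable-v0.3.txt", "v0.3.25"); err != nil {
		panic(err)
	}
}
```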

@sonasingh46

@palnabarun -- Thank you very much. Will jump on to this one and reach out for help and reviews.

@fabriziopandini (Member) commented Feb 4, 2022

After digging into this a little bit, it occurred to me that version resolution is relative to a provider (CAPI latest != CAPV latest).

Given this, using version markers in a GCS bucket gets complicated, because each provider would have to change its release process to publish them; therefore I'm proposing an alternative method to discover version info, similar to the one implemented in the following snippet:

```go
// NOTE: gomodule, includePrereleases and versionRange are defined by the
// enclosing function; this lists the module's versions via the Go module
// proxy and picks the newest tag matching the requested range.
rawURL := url.URL{
	Scheme: "https",
	Host:   "proxy.golang.org",
	Path:   path.Join(gomodule, "@v", "/list"),
}
resp, err := http.Get(rawURL.String()) //nolint:noctx // NB: as we're just implementing an external interface we won't be able to get a context here.
if err != nil {
	return "", err
}
defer resp.Body.Close()
out, err := io.ReadAll(resp.Body)
if err != nil {
	return "", err
}
// Parse every "vX.Y.Z" line returned by the proxy into a semver list.
parsedTags := semver.Versions{}
for _, line := range strings.Split(string(out), "\n") {
	if strings.HasPrefix(line, "v") {
		parsedTags = append(parsedTags, semver.MustParse(strings.TrimPrefix(line, "v")))
	}
}
sort.Sort(parsedTags)
// Keep the newest tag that satisfies the range, optionally skipping pre-releases.
var picked semver.Version
for i, tag := range parsedTags {
	if !includePrereleases && len(tag.Pre) > 0 {
		continue
	}
	if versionRange(tag) {
		picked = parsedTags[i]
	}
}
```

I also suggest the following UX in the docker.yaml file:

when configuring providers:

```diff
   - name: cluster-api
     type: CoreProvider
     versions:
-    - name: v0.3.23 # latest published release in the v1alpha3 series; this is used for v1alpha3 --> v1beta1 clusterctl upgrades test only.
-      value: "https://github.com/kubernetes-sigs/cluster-api/releases/download/v0.3.23/core-components.yaml"
+    - name: "{goproxy://sigs.k8s.io/cluster-api@stable-v0.3}"  # latest published release in the v1alpha3 series; this is used for v1alpha3 --> v1beta1 clusterctl upgrades test only.
+      value: "https://github.com/kubernetes-sigs/cluster-api/releases/download/{goproxy://sigs.k8s.io/cluster-api@stable-v0.3}/core-components.yaml"
```

when configuring the clusterctl binary in variables:

```diff
-  INIT_WITH_BINARY: "https://github.com/kubernetes-sigs/cluster-api/releases/download/v0.4.4/clusterctl-{OS}-{ARCH}"
+  INIT_WITH_BINARY: "https://github.com/kubernetes-sigs/cluster-api/releases/download/{goproxy://sigs.k8s.io/cluster-api@stable-v0.4}/clusterctl-{OS}-{ARCH}"
```

NOTE: I'm proposing a URL-like syntax because it makes things explicit, and it also opens up support for alternative ways to discover version info if required.

To start, we need to resolve the following types of markers:

  • stable-X.Y --> latest release in the series, without pre-releases, e.g. 0.3.25
  • latest-X.Y --> latest release in the series, including pre-releases, e.g. v1.1.0-rc0
    (eventually we will add more, e.g. for nightly builds)

Ideally, markers should be resolved when reading the docker.yaml config file, so all consumers of the e2e framework can benefit from marker resolution without changing tests.
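A minimal sketch of how such markers could be resolved while loading the e2e config, assuming the {goproxy://<module>@stable-X.Y} syntax proposed above; the regular expression and function names are illustrative only, and the resolver argument stands in for the goproxy-based lookup quoted earlier:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// markerRE matches tokens like {goproxy://sigs.k8s.io/cluster-api@stable-v0.3}.
var markerRE = regexp.MustCompile(`\{goproxy://([^@}]+)@((?:stable|latest)-[^}]+)\}`)

// resolveMarkers replaces every marker in the raw config text with a concrete
// version, e.g. "stable-v0.3" -> "v0.3.25", using the supplied resolver.
func resolveMarkers(config string, resolver func(module, marker string) (string, error)) (string, error) {
	var resolveErr error
	out := markerRE.ReplaceAllStringFunc(config, func(token string) string {
		m := markerRE.FindStringSubmatch(token)
		version, err := resolver(m[1], m[2])
		if err != nil {
			resolveErr = err
			return token // leave the marker untouched on error
		}
		return version
	})
	return out, resolveErr
}

func main() {
	cfg := `value: "https://github.com/kubernetes-sigs/cluster-api/releases/download/{goproxy://sigs.k8s.io/cluster-api@stable-v0.3}/core-components.yaml"`
	resolved, err := resolveMarkers(cfg, func(module, marker string) (string, error) {
		// Fake resolver for illustration; a real one would query proxy.golang.org.
		if strings.HasPrefix(marker, "stable-") {
			return "v0.3.25", nil
		}
		return "", fmt.Errorf("unsupported marker %q", marker)
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(resolved)
}
```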

@sbueringer (Member, Author)

Sounds perfect!

@sonasingh46

Sounds good to me! @fabriziopandini

@sbueringer (Member, Author)

Short note from this Slack post: https://kubernetes.slack.com/archives/C8TSNPY4T/p1643976829466139

I think we missed an edge case in the “version resolution logic”. We were assuming it's enough to resolve the version in the e2e config file (CAPD: docker.yaml) when loading the file. But in the case of INIT_WITH_BINARY we have to resolve the version in the clusterctl e2e test, because that config can be passed in:

  • either as env var / docker.yaml
  • or via ClusterctlUpgradeSpecInput

It doesn't change a lot; we only have to call the “version resolution func” from the clusterctl upgrade test as well.

@k8s-ci-robot k8s-ci-robot added the triage/accepted Indicates an issue or PR is ready to be actively worked on. label Aug 5, 2022
@fabriziopandini (Member)

(doing some cleanup on old issues without updates)
/close
Unfortunately no one is picking up the issue; we can still look at the idea above even if the issue is closed.

@k8s-ci-robot (Contributor)

@fabriziopandini: Closing this issue.

In response to this:

(doing some cleanup on old issues without updates)
/close
Unfortunately no one is picking up the issue; we can still look at the idea above even if the issue is closed.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@sbueringer (Member, Author) commented Apr 22, 2023

@killianmuldoon is this something that we want to add as an idea to the CI release team backlog? (improvement tasks issue or the project)

Basically it would make it possible to just reference a release series instead of a specific release in docker.yaml.

I think it's a worthwhile improvement (we would then always test the latest patch releases and would thus be able to catch issues in new patch releases).

@killianmuldoon (Contributor)

@killianmuldoon is this something that we want to add as an idea to the CI release team backlog? (improvement tasks issue or the project)

Looks useful. Let me take a closer look and maybe add it to the release team's backlog.

/assign
/reopen

@k8s-ci-robot (Contributor)

@killianmuldoon: Reopened this issue.

In response to this:

@killianmuldoon is this something that we want to add as an idea to the CI release team backlog? (improvement tasks issue or the project)

Looks useful. Let me take a closer look and maybe add it to the release team's backlog.

/assign
/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@adilGhaffarDev (Contributor)

@killianmuldoon I can work on this task, if it's ok. I am thinking of making markers like this: {#getrelease goproxy://sigs.k8s.io/cluster-api@stable-v0.3}, and resolving these markers at the start of the e2e suite, as suggested by @fabriziopandini.

@sbueringer (Member, Author) commented May 25, 2023

What about simplifying to:

stable would then be compatible with Go. Btw, they maybe even have a library func somewhere to resolve version queries.
xref: #8725 (comment)

I don't know if we need #getrelease. I would keep it simple and not add it.

P.S. I know this is less consistent with the marker k/k is using, but the upside is that it is more consistent with Go.
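As a side note (a sketch, not what the linked comment proposes), the Go module proxy protocol already exposes an @latest endpoint next to @v/list, so a "latest" style query can be answered with a single request; the module path below is just an example:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// latestInfo mirrors the JSON returned by the GOPROXY "@latest" endpoint,
// which reports the latest known version of a module.
type latestInfo struct {
	Version string `json:"Version"`
	Time    string `json:"Time"`
}

func main() {
	resp, err := http.Get("https://proxy.golang.org/sigs.k8s.io/cluster-api/@latest")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var info latestInfo
	if err := json.NewDecoder(resp.Body).Decode(&info); err != nil {
		panic(err)
	}
	fmt.Println(info.Version)
}
```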

@sbueringer (Member, Author)

@fabriziopandini ^^

@killianmuldoon (Contributor)

@adilGhaffarDev Awesome - let me know if you need any help with it!

/assign @adilGhaffarDev

@killianmuldoon (Contributor)

@adilGhaffarDev have you had any luck with this?

@adilGhaffarDev (Contributor)

@adilGhaffarDev have you had any luck with this?

Yes, it's mostly done, I will create the PR today. Sorry, I was on vacation; I came back last week.

@furkatgofurov7 (Member)

Changing the priority to:

/priority important-soon

based on the current CI Lead's suggestion (cc @nawazkh)

@k8s-ci-robot k8s-ci-robot added the priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. label Aug 22, 2023
@furkatgofurov7 (Member)

This is tracked in: #9104

@killianmuldoon (Contributor)

/unassign

As @adilGhaffarDev is on the case 🙂
