Discuss k/release repo tagging and branching policies #857
Comments
Thanks for capturing this, Tim! Let's brainstorm. /assign @justaugustus @calebamiles @tpepper |
In my humble opinion, the release code will be most maintainable, and the end-user experience (for those consuming our generated artifacts) most consistent, if the release code is branched in conjunction with k/k. |
I was thinking the same w.r.t. staying in lockstep with k/k. When would we cut a branch here? Before or after a Kubernetes release? |
I think what should be done first is to manually create all branches in the current support skew based on the current master HEAD:
and to adapt the test-infra jobs. IMO it's a good starting point for managing changes that target different releases. Which repo creates its branch first is a good question. One problem with matching the k/k and k/release branch creation times is that k/release will also need fast-forwards (FF), but I think this is the better option; possibly tooling can one day do that automatically for both repos. The alternative of creating a branch in k/release only after Kubernetes releases, e.g. creating the 1.17 branch only after 1.16 releases, means that changes targeting the current k/release master will affect both 1.17 and 1.16 even after code thaw, which I think is not a good option.

Tags in k/release branches, on the other hand, are more difficult to solve. I think that k/release should not have tags as a start, because a k/release change should not break a previous patch release. This means that if a change in k/release is made for 1.14.4 it should not affect e.g. the artifacts for 1.14.3 (those artifacts are already out in GCS buckets), but the branches still leave the option to modify the future of the 1.14.x stream. 2c |
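A minimal sketch of that manual branch creation, assuming the in-support skew is 1.14 through 1.16 and that k/release reuses k/k's `release-X.Y` branch naming; both the version list and the naming are assumptions for illustration, not decisions recorded in this thread:

```bash
#!/usr/bin/env bash
# Sketch only: cut support-skew branches off the current master HEAD.
# Version list and release-X.Y naming are assumptions, not agreed policy.
set -euo pipefail

git clone https://github.com/kubernetes/release.git
cd release

for minor in 1.14 1.15 1.16; do
  git branch "release-${minor}" master   # branch from current master HEAD
  git push origin "release-${minor}"     # requires push rights on k/release
done
```

The corresponding test-infra job changes mentioned above would still have to be made separately; they are not covered by this sketch.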
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
TL;DR: My personal opinion is not to branch k/release. I think our tools should not really be dependent on the version of k/k. One of the places where this currently seems not to be true are the package specs. For this case I am of the opinion that those should definitely live in k/k.

If we wanted we could also document the version of …. I also think it is OK for …. If we want to introduce automatic branching I guess we'd want something similar to what we do right now for k/k. The current tags ….

⓵ I know, not everything will or can be compiled on the spot. There is still stuff that we might need or want to download from some bucket or GitHub or what have you. However, the information about which version to download or where to get it should live in k/k. |
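One way to read the "compile on the spot" idea is that the release process checks out the tooling at whatever ref it documents and builds it right there, rather than relying on a pre-built binary tied to a k/k branch. Everything in the sketch below (the `RELEASE_TOOL_REF` variable, the checkout path, and the `cmd/release-notes` package path) is an illustrative assumption, not an existing convention:

```bash
#!/usr/bin/env bash
# Hypothetical sketch: build the release tooling "on the spot" from a
# documented ref, independently of which k/k branch is being released.
set -euo pipefail

RELEASE_TOOL_REF="${RELEASE_TOOL_REF:-master}"   # a tag, branch, or commit SHA

git clone https://github.com/kubernetes/release.git /tmp/k-release
git -C /tmp/k-release checkout "${RELEASE_TOOL_REF}"

# Compile the tool that is needed; the cmd/release-notes path is an assumption.
mkdir -p /tmp/bin
(cd /tmp/k-release && go build -o /tmp/bin/release-notes ./cmd/release-notes)
```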
Thanks for the detailed write-up, @hoegaarden! ❤️ @kubernetes/release-engineering -- Soliciting feedback here. ref: https://groups.google.com/a/kubernetes.io/d/topic/release-managers/b55uFmJOUME/discussion |
I’m wondering if it would be good to tag the k/release repo simultaneously with k/k to have a direct link between them. For example, this way I could easily identify that my fix in v1.17.0 broke the release notes in v1.18.0. :) |
I agree with Hannes' ideal of k/release not being dependent on k/k. But there are sooo many implicit connections and assumptions we continue to discover and puzzle through. It would be great if we were able to fully and correctly manage any variances, and do it with minimal conditional code in k/release using feature flags and a deprecation cycle that tracks k/k's. I see that as aspirational though, and fear what the reality might become.

In the short term I think it would be easier to sufficiently manage the unknown by peeling a release-X.YY branch off of k/release master in conjunction with k/k's branching. In that case deprecation is simply that an old branch in its entirety goes out of use when the corresponding branch of k/k goes out of support. Development would be on k/release master. Bug fixes might occasionally need to be cherry-picked from k/release master to k/release release-X.YY. We could tag each branch with a k/k version string ahead of checking out that tag's k/release content for use in building that k/k version.

We could also defer that branch creation to the point where we encounter an incompatible change that is not surmountable by adding a feature flag to conditionally run the change only for newer k/k builds. But in that case, with development happening on k/release master and insufficient ability to test/validate, we need some way of pinning the tooling used for a particular k/k build. Tagging stable points on k/release master seems like a way to do that. But the tools then need to consume k/release at a configurable tag instead of pulling and running master HEAD. |
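A rough sketch of that "tag a stable point, consume it by tag" flow; the `K8S_VERSION` variable and the `k8s-<version>` tag format are made up here purely for illustration and are not an agreed scheme:

```bash
#!/usr/bin/env bash
# Sketch of "tag a stable point on k/release, then consume it by tag".
# K8S_VERSION and the k8s-<version> tag format are assumptions.
set -euo pipefail

K8S_VERSION="v1.17.0"
K_RELEASE_TAG="k8s-${K8S_VERSION}"

# On a known-good k/release commit, record the stable point ...
git tag -a "${K_RELEASE_TAG}" -m "k/release state used to build ${K8S_VERSION}"
git push origin "${K_RELEASE_TAG}"

# ... and have the k/k build pull exactly that state instead of master HEAD.
git clone --branch "${K_RELEASE_TAG}" --depth 1 \
  https://github.com/kubernetes/release.git "/tmp/k-release-${K8S_VERSION}"
```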
SIG Docs is interested in a general solution for branching as well! |
Branching/tagging was also raised on the kubeadm out-of-tree KEP: kubernetes/enhancements#1425 |
Summarizing our ask from the email discussion: SIG Scalability would also be interested in automatic branch cutting. |
👋 @JamesLaverack @evillgenius75 (w.r.t. the relnotes tool versioning) |
ref on merge-blocking issues: … |
I'm totally in favor of using tagged releases for k/release. There is still one drawback: our tooling changes rapidly, and we would not have a chance to fix issues as quickly as we can now. 🤔 |
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
/remove-lifecycle rotten |
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
Right now we're cutting releases if we think that we've got a fair amount of features in. I think we should not change that until we decide to cut a v1.0.0. Let's close this for now and decide how to move forward at a later point. |
@saschagrunert: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
It is unclear what the tags mean and when they're applied, e.g.:
This lifecycle management should be documented as part of the process. Since the tags are v0, I suppose it's acceptable at this point that there are no acceptance-test criteria or compatibility expectations, but it would be beneficial to aspire to fewer unexpected build-breaking changes in the repo.
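One lightweight way to make the meaning of a tag discoverable, assuming annotated tags are used; the tag name, the referenced Kubernetes version, and the message wording below are examples only:

```bash
# Record why a tag was cut in the annotation itself (names here are examples).
git tag -a v0.1.0 -m "Tooling snapshot used for the v1.16.x patch releases; v0.x, no compatibility guarantees."
git push origin v0.1.0

# The annotations then double as a minimal record of what each tag meant.
git for-each-ref refs/tags --format='%(refname:short)  %(contents:subject)'
```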