Discuss k/release repo tagging and branching policies #857

Closed
tpepper opened this issue Aug 21, 2019 · 21 comments

Labels

  • area/release-eng: Issues or PRs related to the Release Engineering subproject
  • kind/documentation: Categorizes issue or PR as related to documentation.
  • lifecycle/stale: Denotes an issue or PR has remained open with no activity and has become stale.
  • priority/important-soon: Must be staffed and worked on either currently, or very soon, ideally in time for the next release.
  • sig/release: Categorizes an issue or PR as relevant to SIG Release.

Comments

@tpepper
Member

tpepper commented Aug 21, 2019

It is unclear what the tags mean and when they're applied, e.g.:

$ git tag
v0.1.0
v0.1.1
v0.1.2
v0.1.3

This lifecycle management should be documented as part of our process. Since they're v0, I suppose it's acceptable at this point that there are no acceptance test criteria or compatibility expectations, but it would be beneficial to aspire to fewer unexpected build-breaking changes in the repo.
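
For reference, a quick way to check whether these existing tags carry any annotation message (i.e. any recorded intent) is plain git:

# list tags together with up to five lines of their annotation messages, if any
$ git tag -n5
# or inspect a single tag (and the commit it points at) in full
$ git show v0.1.0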

@tpepper tpepper added the area/release-eng Issues or PRs related to the Release Engineering subproject label Aug 21, 2019
@justaugustus
Member

Thanks for capturing this, Tim!
So far, my policy has been, "We're about to make a change that could do weird things, let's tag so we have a sane place to go if things go wrong", but that's not really much of a policy at all.

Let's brainstorm.

/assign @justaugustus @calebamiles @tpepper
/priority important-soon
/kind documentation
/milestone v1.16

@k8s-ci-robot k8s-ci-robot added this to the v1.16 milestone Aug 21, 2019
@k8s-ci-robot k8s-ci-robot added priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. kind/documentation Categorizes issue or PR as related to documentation. labels Aug 21, 2019
@justaugustus justaugustus changed the title from "document k/release repo tagging policy" to "Discuss k/release repo and branching policies" Aug 23, 2019
@justaugustus justaugustus changed the title from "Discuss k/release repo and branching policies" to "Discuss k/release repo tagging and branching policies" Aug 23, 2019
@tpepper
Member Author

tpepper commented Aug 23, 2019

In my humble opinion, the release code will be most maintainable, and the end user experience (for those consuming our generated artifacts) most consistent, if the release code is branched in conjunction with k/k.

@justaugustus
Member

justaugustus commented Aug 23, 2019

I was thinking the same w.r.t. staying lockstep with k/k.
.0 only or all patches?

When would we cut a branch here? Before or after a Kubernetes release?
Maybe integrate this into our current tooling so we automatically cut a release in k/r when we release k/k?

@neolit123
Member

neolit123 commented Aug 23, 2019

I think what should be done first is to manually create all branches in the current support skew, based on the current master HEAD:

  • release-1.13
  • release-1.14
  • release-1.15
  • release-1.16
  • release-1.17

and adapt the test-infra jobs accordingly. IMO it's a good starting point for managing changes that target different releases.
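
As an illustration only (not an agreed process), that manual step could look roughly like this, assuming push rights on the k/release remote:

$ git fetch origin
# create each release branch from the current master HEAD and push it
$ for b in release-1.13 release-1.14 release-1.15 release-1.16 release-1.17; do
>   git branch "$b" origin/master && git push origin "$b"
> done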

Which repo creates the branch first is a good question.

One problem with matching the k/k and k/release branch creation time is that k/release will also need FF (fast-forwards), but I think this is the better option. Possibly tooling can one day do that automatically for both repos.

The alternative of creating a branch in k/release only after Kubernetes releases (e.g. creating the 1.17 branch only after 1.16 releases) means that changes targeting the current k/release master will affect both 1.17 and 1.16 even after code thaw, which I think is not a good option.

Tags in k/release branches, on the other hand, are more difficult to solve.
While k/k can see a lot of tags and changes, k/release can be more stale.

I think that k/release should not have tags as a start, because a k/release change should not break a previous patch release. This means that if a change in k/release is made for 1.14.4, it should not affect e.g. the artifacts for 1.14.3. The artifacts for 1.14.3 are already out in GCS buckets, but the branches still leave the option to modify the future of the 1.14.x stream.

2c

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 21, 2019
@justaugustus justaugustus modified the milestones: v1.16, v1.18 Dec 4, 2019
@justaugustus justaugustus added sig/release Categorizes an issue or PR as relevant to SIG Release. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Dec 4, 2019
@hoegaarden
Contributor

TL;DR: My personal opinion is not to branch k/release, if we can get away with it -- which I think we can.

I think our tools should not really be dependent on the version of k/k. And as far as I can remember from the parts I have seen, this should be relatively easy without ending up with tons of nested ifs and switch/cases.

One of the places where this currently does not seem to be true is the package specs. For this case I am of the opinion that those should definitely live in k/k anyway, and I imagine the following workflow for building packages (sketched below):

  • checkout k/k in the desired revision
  • checkout k/release:master or get the latest container with the tools from k/release or compile the latest thing from k/release
  • build all things k/k
  • use kubepkg, which consumes the packaging specs from the k/k checkout and packages the binaries that have just been compiled⓵
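
A rough, purely illustrative sketch of that flow, assuming the package specs have already moved into k/k (the revision, paths, and the kubepkg invocation are placeholders, not the tool's actual CLI):

# 1. check out k/k at the desired revision (v1.17.0 is just an example)
$ git clone https://github.com/kubernetes/kubernetes && cd kubernetes
$ git checkout v1.17.0
# 2. get the release tooling: k/release master, or the latest container with the tools
$ git clone https://github.com/kubernetes/release ../release
# 3. build all things k/k (simplified)
$ make release
# 4. package the freshly built binaries, consuming the packaging specs from the k/k checkout
#    (the exact kubepkg flags are omitted here on purpose)
$ kubepkg ...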

If we wanted we could also document the version of k/release we have used to cut a version of k/k by (automatically) adding a tag like kubernetes-v1.17.0-alpha.3 to the revision of k/release we used. We can also document / log that somewhere else.

I also think it is OK for k/release to only support the same versions that k/k supports at any point in time. E.g. I do not expect k/release to do the right thing and work correctly today when I run its tools for / against k/k:v1.3.0. So if we need some feature flags or branches in k/release, we could remove them after ~9 months (as of right now).

If we want to introduce automatic branching I guess we'd want something similar to what we do right now for k/k:
When we cut the first beta, anago creates a new branch for k/k, branched off of master. We / anago / krel / GCB could do the same for other repos.
However I guess we can only do so "blindly". anago would not have any idea which revision of k-sigs/my-external-thing is compatible with which version of k/k. I think this branch would only be a signal to the k-sigs/my-external-thing maintainers that something happened upstream they might want to care about. And that might be fine and useful for people, e.g. for the k8s.io/perf-test use case? For everything else (e.g. tagging other alphas, non-first-betas, rcs, officials, ...) neither k/k nor k/release would have much of an idea which branch/revision of an external repo is compatible with k/k and what should be tagged. This is IMHO the responsibility of the CI system / the release pipeline of the external repo.

The current tags v0.1.* were IIRC just a quick way to say: OK, we are planning bigger refactors, and because we don't have tests and cannot guarantee much, we mark the point where we know the thing worked. E.g. the tag v0.2.0 has the message "Release tooling snapshot for Kubernetes v1.17.0". This is not that easy to discover, but better than nothing. If we tag, we should try to continue to capture intent in the tag's message (or make the tag itself self-explanatory by using something like the scheme mentioned above, e.g. kubernetes-v1.17.0-alpha.3).
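
For example (tag name and message are illustrative, following the kubernetes-v1.17.0-alpha.3 scheme above), an annotated tag keeps that intent attached to the tag itself:

# record which k/k cut this state of k/release was used for
$ git tag -a kubernetes-v1.17.0-alpha.3 -m "Release tooling snapshot used for Kubernetes v1.17.0-alpha.3"
$ git push origin kubernetes-v1.17.0-alpha.3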

⓵ I know, not everything will or can be compiled on the spot. There is still stuff that we might need or want to download from some bucket or GitHub or what have you. However, the information about which version to download, or where to get it, should live in k/k and be bumped there, not in k/release.

@justaugustus
Member

Thanks for the detailed write-up, @hoegaarden! ❤️

@kubernetes/release-engineering -- Soliciting feedback here.

ref: https://groups.google.com/a/kubernetes.io/d/topic/release-managers/b55uFmJOUME/discussion
cc: @mm4tt @nikhita @sttts @dims @liggitt

@saschagrunert
Member

I’m wondering if it would be good to tag the k/release repo simultaneously with k/k to have a direct link between them.

For example, this way I could easily identify that my fix in v1.17.0 broke the release notes in v1.18.0. :)

@tpepper
Member Author

tpepper commented Jan 7, 2020

I agree with Hannes' ideal of k/release not being dependent on k/k. But there are sooo many implicit connections and assumptions we continue to discover and puzzle through. It would be great if we were able to fully and correctly manage any variances, with minimal conditional code in k/release, using feature flags and a deprecation cycle that tracks k/k's. I see that as aspirational though, and fear what the reality might become.

In the short term I think it would be easier to sufficiently manage the unknown by peeling a release-X.YY branch off of k/release master in conjunction with k/k's branching. In that case deprecation is simply that an old branch in its entirety goes out of use when the corresponding branch of k/k goes out of support. Development would be on k/release master. Bug fixes might occasionally need to be cherry-picked from k/release master to k/release release-X.YY. We could tag each branch with a k/k version string ahead of checking out that tag's k/release content for use in building that k/k version.

We could also defer that branch creation to the point where we encounter an incompatible change that cannot be handled by adding a feature flag to conditionally run the change only for newer k/k builds. But in that case, with development happening on k/release master and insufficient ability to test/validate, we need some way of pinning the tooling used for a particular k/k build. Tagging stable points on k/release master seems like a way to do that. But the tools then need to consume k/release at a configurable tag instead of pulling and running master HEAD.
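
A hedged sketch of how those two ideas could fit together, with branch, tag, and commit names purely illustrative:

# in k/release: land a fix on master, then cherry-pick it to the release branch
$ git checkout release-1.18
$ git cherry-pick <sha-of-fix-on-master>
# mark a known-good point with a k/k version string, as suggested above
$ git tag -a kubernetes-v1.18.0 -m "k/release state used to build Kubernetes v1.18.0"
$ git push origin release-1.18 kubernetes-v1.18.0
# on the consuming side: check out k/release at that tag instead of running master HEAD
$ git clone https://github.com/kubernetes/release && cd release
$ git checkout kubernetes-v1.18.0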

@justaugustus
Member

SIG Docs is interested in a general solution for branching as well!
cc: @zacharysarah
xref: kubernetes/test-infra#15779

@justaugustus
Member

Branching/tagging also raised on the kubeadm out-of-tree KEP: kubernetes/enhancements#1425

@mm4tt

mm4tt commented Jan 24, 2020

Summarizing our ask from the email discussion.

SIG Scalability would also be interested in automatic branch cutting.
Our use case is quite simple: we'd like to have a k/perf-tests branch (based off the k/perf-tests master HEAD) cut for every minor release branch in the k/k repo.
Currently we need to do it manually, and only @wojtek-t has permissions to do so. Having some kind of automation around it would be really helpful.

@justaugustus
Member

👋 @JamesLaverack @evillgenius75 (w.r.t. the relnotes tool versioning)

@justaugustus
Member

ref on merge-blocking issues:

Sounds good to me. The other way around we could stick to certain releases in the next cycle, so we would not have a need for a merge freeze. WDYT?

@saschagrunert -- That's what I had in mind as well, but it's a process change in a few places, so we need to think it through a little bit, e.g., Release Managers need to check out k/release@<tag> and images need to be versioned against the tag, which means they need to be rebuilt at the tag cut and then promoted.

Let's discuss some of this here --> #857

@justaugustus justaugustus pinned this issue Mar 25, 2020
@saschagrunert
Member

I'm totally in favor of using tagged releases for k/release. We still have one drawback: our tooling changes rapidly, and we would not have a chance to fix issues as fast as we can now. 🤔

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 24, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jul 24, 2020
@jimangel
Member

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Aug 23, 2020
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 21, 2020
@saschagrunert
Member

Right now we're cutting releases if we think that we've got a fair amount of features in. I think we should not change that until we decide to cut a v1.0.0. Let's close this for now and decide how to move forward at a later point.
/close

@k8s-ci-robot
Contributor

@saschagrunert: Closing this issue.

In response to this:

Right now we're cutting releases if we think that we've got a fair amount of features in. I think we should not change that until we decide to cut a v1.0.0. Let's close this for now and decide how to move forward at a later point.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@justaugustus justaugustus unpinned this issue Feb 15, 2021