
patch release team #331

Closed
tpepper opened this issue Oct 8, 2018 · 13 comments
Labels
kind/cleanup Categorizes issue or PR as related to cleaning up code, process, or technical debt. sig/release Categorizes an issue or PR as relevant to SIG Release.

Comments

@tpepper (Member) commented Oct 8, 2018

We've been discussing a shift toward a patch release team instead of having one person (typically a Google employee) act as the patch manager for each currently supported release. Today one person's name sits next to a release for nine months: 1.10 is @MaciekPytel, 1.11 is @foxish, and 1.12 is @feiskyer. This leaves too much knowledge under-documented and tribal, and requires heroics. We want more people involved in the release engineering process, more shared responsibility and ownership, and a more even spread of workload, so that the workflow is sustainable for the long run.

This issue will act as an umbrella to capture tasks to support this shift.

@tpepper tpepper added sig/release Categorizes an issue or PR as relevant to SIG Release. kind/cleanup Categorizes issue or PR as related to cleaning up code, process, or technical debt. labels Oct 8, 2018
@tpepper (Member, Author) commented Oct 8, 2018

/cc @castrojo @ttousai @sumitranr

@justaugustus (Member):

Thanks for kick-starting tracking this, @tpepper!
(cc-ing @spiffxp, as I know he had interest in seeing this happen.)

@dims (Member) commented Oct 8, 2018

Items from a quick ad-hoc meeting:

  • @sumitranr (from Google) will be helping with cutting debs/rpms (thanks!)
  • @mbohlool will share some snippets from the script used today with @dims; the idea is to see how much of it can be integrated into the k/release repo
  • Ideally the existing scripts (run by patch managers) are already publishing debs/rpms (if not, we need to make that happen). This part can be run by a non-Googler.
  • A Googler would then be needed to run a script that downloads the debs/rpms, signs them, and uploads them to the final destination.
  • We should try to replicate the apt repo in a CNCF GCS bucket for testing betas/RCs, etc. @calebamiles or @mbohlool to share the gcsweb link with @dims.

Other folks on the call were @AishSundar, @tpepper, @randomvariable, and @spiffxp. More detailed notes are here: https://etherpad.openstack.org/p/dims-packaging

@feiskyer (Member) commented Oct 9, 2018

+1. Maybe the first step is to automate the debs/rpms cutting?

@randomvariable (Member):

I think so.

Are the broken 1.12.1-02 debs still available anywhere?

@spiffxp (Member) commented Oct 9, 2018

One thing that came up during today's SIG Release meeting: it would be ideal to have some kind of schedule for patch cuts. The docs call out "every 2 to 4 weeks", but that's just not predictable and leads to people having to ask when the next patch is due. Patch release managers should still have the freedom to cut off-schedule if there are urgent or critical fixes, but otherwise I think a schedule published ahead of time would be more predictable for everyone involved.

@sumitranr commented Oct 9, 2018 via email

@MaciekPytel (Contributor):

+1 for a predictable schedule. I just want to point out that the number of cherry-picks and the community's appetite for patch releases change over the release lifecycle, and the schedule should reflect that. In my experience it was closer to "every 2 weeks" for the first 6 or so patch releases, and it's more like "every 4 weeks" now.
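The cadence described here can be sketched as a small schedule generator. This is purely illustrative: the 2-week/4-week cadence comes from this comment (not official SIG Release policy), the 1.12.0 release date is from the public release history, and `patch_schedule` is a hypothetical helper, not anything in k/release:

```python
from datetime import date, timedelta

def patch_schedule(minor_release: date, n_patches: int) -> list[date]:
    """Propose target cut dates for patch releases .1 through .n_patches.

    Assumed cadence (drawn from this thread, not an official policy):
    roughly every 2 weeks for the first 6 patch releases, then
    roughly every 4 weeks afterwards.
    """
    schedule = []
    cut = minor_release
    for z in range(1, n_patches + 1):
        cut = cut + timedelta(weeks=2 if z <= 6 else 4)
        schedule.append(cut)
    return schedule

# Example: 1.12.0 shipped 2018-09-27
for z, d in enumerate(patch_schedule(date(2018, 9, 27), 8), start=1):
    print(f"1.12.{z}: {d.isoformat()}")
```

Publishing a table like this at the start of each release cycle would let users plan around expected patch dates while still allowing off-schedule cuts for urgent fixes.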

@dims (Member) commented Dec 6, 2018

/unassign @dims

@fejta-bot:

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 6, 2019
@tpepper (Member, Author) commented Mar 7, 2019

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 7, 2019
@justaugustus (Member):

Closing in favor of #369.
/close

@k8s-ci-robot (Contributor):

@justaugustus: Closing this issue.

In response to this:

Closing in favor of #369.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
