
RPMs should be gated with kubeadm and then tagged #305

Open
sdake opened this issue Mar 30, 2017 · 19 comments
Labels
area/release-eng: Issues or PRs related to the Release Engineering subproject
help wanted: Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.
lifecycle/frozen: Indicates that an issue or PR should not be auto-closed due to staleness.
needs-kind: Indicates a PR lacks a `kind/foo` label and requires one.
priority/important-longterm: Important over the long term, but may not be staffed and/or may need multiple releases to complete.
sig/release: Categorizes an issue or PR as relevant to SIG Release.
Milestone

Comments

@sdake

sdake commented Mar 30, 2017

This repository builds kubeadm RPMs; however, nothing is gated, as evidenced by the chaos caused during the Kubernetes 1.6 release. Continuously gating the built RPMs would validate their correctness.

In upstream kolla-kubernetes (http://github.com/openstack/kolla-kubernetes) we do gate the generated Kubernetes RPMs - perhaps there is something useful to be learned from our gating tools.

We have a couple of work-in-progress patches that might also help correct some issues found in the existing packaging:

As the kubeadm beta was deleted, I attempted to build what I thought were the correct RPMs from this repository. Since nothing was tagged in this repository, it is possible this was done incorrectly. Anything without "multi" in the name indicates that only kubeadm init was used; a multi gate job means kubeadm join is used (which blocks on a certificate failure).
https://review.openstack.org/#/c/451556/

Kubernetes 1.6.0
https://review.openstack.org/#/c/451391/

@mikedanese
Member

What does gating mean?

@kfox1111
Contributor

There were several issues with how the 1.6.0 packages were handled:

  1. RPM treats release 0 as less than release 0.alpha; final releases should use release 1.
  2. Rather than fix issue 1, the solution used was to delete the alpha RPM. That doesn't work for several reasons:
     1. People have cached copies.
     2. Once people have installed the alpha, the system won't ever try to downgrade.
     3. If the RPM is deleted, you can never go back to the previous version.
     4. 1.6.0-0 can't deploy 1.5.x properly because of RPM dependencies.

1.5.x was stable, 1.6.0 had just been released, and, as is common, x.0 was buggy. So operators need a way to pin back to 1.5.x until 1.6.1 is released.
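The ordering bug in point 1 can be illustrated with a toy re-implementation of RPM's version-comparison rules (a simplified sketch for illustration only; real librpm also handles `~` and `^` segments, which this sketch omits):

```python
import re

def rpmvercmp(a: str, b: str) -> int:
    """Simplified rpmvercmp: -1 if a is older, 0 if equal, 1 if a is newer."""
    seg = re.compile(r"[0-9]+|[a-zA-Z]+")
    sa, sb = seg.findall(a), seg.findall(b)
    for x, y in zip(sa, sb):
        if x.isdigit() and y.isdigit():
            # numeric segments compare as integers
            if int(x) != int(y):
                return -1 if int(x) < int(y) else 1
        elif x.isdigit() != y.isdigit():
            # a numeric segment sorts newer than an alphabetic one
            return 1 if x.isdigit() else -1
        elif x != y:
            # alphabetic segments compare lexically
            return -1 if x < y else 1
    if len(sa) == len(sb):
        return 0
    # the version with segments left over sorts newer, hence "0" < "0.alpha"
    return -1 if len(sa) < len(sb) else 1

print(rpmvercmp("0", "0.alpha"))  # -1: release 0 sorts OLDER than 0.alpha
print(rpmvercmp("1", "0.alpha"))  # 1: a final release of 1 sorts correctly
```

This is why deleting the alpha RPM was used as a workaround: with release 0, the final build could never win the version comparison against the alpha.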

@kfox1111
Contributor

@mikedanese Some kind of automated job that blocks the release until the artifacts are tested for functionality.

@mikedanese mikedanese self-assigned this Apr 1, 2017
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 22, 2017
@gtirloni

I think this was addressed in the last few releases, wasn't it?

$ repoquery --disablerepo='*' --enablerepo='kubernetes' --show-duplicates -q 'kubelet*'
kubelet-0:1.5.4-0.x86_64
kubelet-0:1.5.4-1.x86_64
kubelet-0:1.6.0-0.x86_64
kubelet-0:1.6.0-1.x86_64
kubelet-0:1.6.1-0.x86_64
kubelet-0:1.6.1-1.x86_64
kubelet-0:1.6.2-0.x86_64
kubelet-0:1.6.2-1.x86_64
kubelet-0:1.6.3-0.x86_64
kubelet-0:1.6.3-1.x86_64
kubelet-0:1.6.4-0.x86_64
kubelet-0:1.6.4-1.x86_64
kubelet-0:1.6.5-0.x86_64
kubelet-0:1.6.5-1.x86_64
kubelet-0:1.6.6-0.x86_64
kubelet-0:1.6.6-1.x86_64
kubelet-0:1.6.7-0.x86_64
kubelet-0:1.6.7-1.x86_64
kubelet-0:1.6.8-0.x86_64
kubelet-0:1.6.8-1.x86_64
kubelet-0:1.6.9-0.x86_64
kubelet-0:1.6.9-1.x86_64
kubelet-0:1.6.10-0.x86_64
kubelet-0:1.6.10-1.x86_64
kubelet-0:1.6.11-0.x86_64
kubelet-0:1.6.11-1.x86_64
kubelet-0:1.6.12-0.x86_64
kubelet-0:1.6.12-1.x86_64
kubelet-0:1.6.13-0.x86_64
kubelet-0:1.6.13-1.x86_64
kubelet-0:1.7.0-0.x86_64
kubelet-0:1.7.0-1.x86_64
kubelet-0:1.7.1-0.x86_64
kubelet-0:1.7.1-1.x86_64
kubelet-0:1.7.2-0.x86_64
kubelet-0:1.7.2-1.x86_64
kubelet-0:1.7.3-1.x86_64
kubelet-0:1.7.3-2.x86_64
kubelet-0:1.7.4-0.x86_64
kubelet-0:1.7.4-1.x86_64
kubelet-0:1.7.5-0.x86_64
kubelet-0:1.7.5-1.x86_64
kubelet-0:1.7.6-1.x86_64
kubelet-0:1.7.6-2.x86_64
kubelet-0:1.7.7-1.x86_64
kubelet-0:1.7.7-2.x86_64
kubelet-0:1.7.8-1.x86_64
kubelet-0:1.7.8-2.x86_64
kubelet-0:1.7.9-0.x86_64
kubelet-0:1.7.9-1.x86_64
kubelet-0:1.7.10-0.x86_64
kubelet-0:1.7.10-1.x86_64
kubelet-0:1.7.11-0.x86_64
kubelet-0:1.7.11-1.x86_64
kubelet-0:1.8.0-0.x86_64
kubelet-0:1.8.0-1.x86_64
kubelet-0:1.8.1-0.x86_64
kubelet-0:1.8.1-1.x86_64
kubelet-0:1.8.2-0.x86_64
kubelet-0:1.8.2-1.x86_64
kubelet-0:1.8.3-0.x86_64
kubelet-0:1.8.3-1.x86_64
kubelet-0:1.8.4-0.x86_64
kubelet-0:1.8.4-1.x86_64
kubelet-0:1.8.5-0.x86_64
kubelet-0:1.8.5-1.x86_64
kubelet-0:1.8.6-0.x86_64
kubelet-0:1.9.0-0.x86_64

@sdake
Author

sdake commented Dec 27, 2017

@gtirloni I can't say for sure. The basic issue I filed was that kubelet and friends are not run inside any type of CI against the built RPMs. The fact that the RPMs are present (i.e. history is not lost) is a bit orthogonal to the actual issue. However, some folks did pile on and indicate the lack of historic RPMs was a real problem for the general kubernetes-consuming community.

That said, every time I see an RPM with -0 in the release tag, I cry a little inside. Packages should never have x.y.z-alpha where alpha == 0. This is a historic holdover from general problems with RPM. @kfox1111 could add more detail.

Cheers
-steve
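For context on the -0 convention Steve describes: the usual way RPM packagers make pre-releases sort older than the final build is a 0.N release prefix, as in the Fedora packaging convention. A sketch of what that looks like in a spec file (the %{?dist} macro is a common Fedora idiom and may not match what this repository actually uses):

```spec
# Pre-release build: Release starts with 0.N, so it sorts OLDER than the final
Version: 1.6.0
Release: 0.1.alpha.1%{?dist}

# Final build: Release starts at 1
Version: 1.6.0
Release: 1%{?dist}
```

Under rpmvercmp, 0.1.alpha.1 < 1, so upgrades from the alpha to the final build resolve correctly without deleting anything from the repo.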

@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 26, 2018
@kfox1111
Contributor

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Jan 26, 2018
@0xmichalis

/lifecycle frozen

@k8s-ci-robot k8s-ci-robot added the lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. label Feb 2, 2018
marpaia pushed a commit to marpaia/release that referenced this issue Feb 21, 2019
Removes outdated reference to github munger
@justaugustus
Member

/help
/milestone next
/priority important-longterm
/area release-eng

@k8s-ci-robot
Contributor

@justaugustus:
This request has been marked as needing help from a contributor.

Please ensure the request meets the requirements listed here.

If this request no longer meets these requirements, the label can be removed
by commenting with the /remove-help command.

In response to this:

/help
/milestone next
/priority important-longterm
/area release-eng

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added the priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. label May 1, 2019
@k8s-ci-robot k8s-ci-robot added this to the next milestone May 1, 2019
@k8s-ci-robot k8s-ci-robot added area/release-eng Issues or PRs related to the Release Engineering subproject help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. labels May 1, 2019
@justaugustus justaugustus added the sig/release Categorizes an issue or PR as relevant to SIG Release. label Dec 9, 2019
@xmudrii
Member

xmudrii commented May 7, 2020

@mikedanese Hello. I see that you are assigned to this issue. I'm wondering if there is any update on this - would it be possible to do something about it, or should we maybe unassign/reassign the issue?

@justaugustus
Member

We're discussing tagging/release policies in #857.
If this issue is still relevant, please feel free to reopen with updated status.

/close

@k8s-ci-robot
Contributor

@justaugustus: Closing this issue.

In response to this:

We're discussing tagging/release policies in #857.
If this issue is still relevant, please feel free to reopen with updated status.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@sdake
Author

sdake commented May 13, 2020

/reopen

@justaugustus as far as I am aware, no CI testing is done against generated RPMs. The proposal in #857 does not address the unmet requirement. As such, I am re-opening this issue.

@kfox1111 in particular has more interest in this topic than I do.

@k8s-ci-robot k8s-ci-robot reopened this May 13, 2020
@k8s-ci-robot
Contributor

@sdake: Reopened this issue.

In response to this:

/reopen

@justaugustus as far as I am aware, no CI testing is done against generated RPMs. The proposal in #857 does not address the unmet requirement. As such, I am re-opening this issue.

@kfox1111 in particular has more interest in this topic than I do.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added the needs-kind Indicates a PR lacks a `kind/foo` label and requires one. label May 13, 2020
@mikedanese
Member

/unassign

@lasomethingsomething

Hello @sdake and @kfox1111: Almost a year has passed since the last comment. What is your current view on this issue? If it's still needed, do you have bandwidth to help make a contribution that would push it along? If you need information about how to do that, please reach out here and we can offer some guidance.

@kfox1111
Contributor

Sites that are big enough now mirror the repo and test on a test cluster before deploying to production.

Smaller sites may still point at the upstream repos directly in prod and run into this problem. It's still an issue for them, I think.

I've got enough clusters now that I'm in the former category, though. I don't currently have enough spare cycles to fix an issue I don't personally have. Sorry.

10 participants