Addon management layer #18

Closed
18 tasks
errordeveloper opened this issue Jun 28, 2016 · 83 comments
Labels: kind/feature · lifecycle/rotten · sig/cluster-lifecycle · stage/alpha · tracked/no

Comments

@errordeveloper
Member

errordeveloper commented Jun 28, 2016

Feature Description

  • One-line feature description (can be used as a release note): Support for system addon management
  • Primary contact (assignee): @justinsb
  • Responsible SIGs: @kubernetes/sig-cluster-lifecycle-feature-requests
  • Design proposal link (community repo):
  • Reviewer(s) - (for LGTM) recommend having 2+ reviewers (at least one from code-area OWNERS file) agreed to review. Reviewers from multiple companies preferred:
  • Approver (likely from SIG/area to which feature belongs):
  • Feature target (which target equals which milestone):
    • Alpha release target (x.y) 1.8
    • Beta release target (x.y) 1.9
    • Stable release target (x.y) 1.10
# Description

Currently, add-on management is baked into kube-up.sh and is subject to various kinds of issues. The current implementation lacks transparency, a clear specification, and any separation from other layers, among other more technical downsides. This is part of the effort towards #11, and is a fairly easy one to tackle.

As discussed in kubernetes-retired/kubernetes-anywhere#126, it looks like low-hanging fruit, and existing projects can easily be used to pave the way. More specifically, the assumption is that Helm should work pretty well for this, and only a thin layer of automation would be required to make it seamless.

Progress Tracker

  • Before Alpha
    • Design Approval
      • Design Proposal. This goes under docs/proposals. Doing a proposal as a PR allows line-by-line commenting from the community, and creates the basis for later design documentation. Paste link to merged design proposal here:
      • Initial API review (if API). Maybe same PR as design doc. PR-NUMBER
        • Any code that changes an API (/pkg/apis/...)
        • cc @kubernetes/api
    • Write (code + tests + docs) then get them merged. ALL-PR-NUMBERS
      • Code needs to be disabled by default. Verified by code OWNERS
      • Minimal testing
      • Minimal docs
        • cc @kubernetes/docs on docs PR
        • cc @kubernetes/feature-reviewers on this issue to get approval before checking this off
        • New apis: Glossary Section Item in the docs repo: kubernetes/kubernetes.github.io
      • Update release notes
  • Before Beta
    • Testing is sufficient for beta
    • User docs with tutorials
      • Updated walkthrough / tutorial in the docs repo: kubernetes/kubernetes.github.io
      • cc @kubernetes/docs on docs PR
      • cc @kubernetes/feature-reviewers on this issue to get approval before checking this off
    • Thorough API review
      • cc @kubernetes/api
  • Before Stable
    • docs/proposals/foo.md moved to docs/design/foo.md
      • cc @kubernetes/feature-reviewers on this issue to get approval before checking this off
    • Soak, load testing
    • detailed user docs and examples
      • cc @kubernetes/docs
      • cc @kubernetes/feature-reviewers on this issue to get approval before checking this off

FEATURE_STATUS is used for feature tracking and to be updated by @kubernetes/feature-reviewers.
FEATURE_STATUS: IN_DEVELOPMENT

More advice:

Design

  • Once you get LGTM from a @kubernetes/feature-reviewers member, you can check this checkbox, and the reviewer will apply the "design-complete" label.

Coding

  • Use as many PRs as you need. Write tests in the same or different PRs, as is convenient for you.
  • As each PR is merged, add a comment to this issue referencing the PRs. Code goes in the https://github.com/kubernetes/kubernetes repository, and sometimes https://github.com/kubernetes/contrib, or other repos.
  • When you are done with the code, apply the "code-complete" label.
  • When the feature has user docs, please add a comment mentioning @kubernetes/feature-reviewers and they will
    check that the code matches the proposed feature and design, and that everything is done, and that there is adequate
    testing. They won't do detailed code review: that already happened when your PRs were reviewed.
    When that is done, you can check this box and the reviewer will apply the "code-complete" label.

Docs

  • Write user docs and get them merged in.
  • User docs go into https://github.com/kubernetes/kubernetes.github.io.
  • When the feature has user docs, please add a comment mentioning @kubernetes/docs.
  • When you get LGTM, you can check this checkbox, and the reviewer will apply the "docs-complete" label.
@errordeveloper
Member Author

cc @kubernetes/sig-cluster-lifecycle @kubernetes/helm-maintainers

@mikedanese
Member

@bgrant0607
Member

cc @philips

@bgrant0607
Member

ref kubernetes/kubernetes#7459

@luxas
Member

luxas commented Jul 4, 2016

@mikedanese @errordeveloper Do you have any proposals for a new API, or should this be done via automation/helm or some other thing?

At least I'd like to see a kubectl command (like @justinsb did); ideally, the user could just toggle predefined (built-in) addons or apply custom ones.
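To make that concrete, a purely hypothetical sketch of such a UX (none of these subcommands exist today; the names are illustrative only):

```sh
# Hypothetical UX sketch only: there is no "kubectl addons" subcommand yet.
kubectl addons enable dns dashboard    # toggle built-in addons on
kubectl addons disable registry        # ...and off again
kubectl addons apply -f my-addon.yaml  # install a custom addon from a manifest
```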

@errordeveloper
Member Author

@luxas so far the idea is to use Helm, and avoid creating new machinery. There would be a very thin layer on top of Helm, which would take some kind of object describing the list of addons (with optional parameters); such an object would be supplied by the preceding turnup phase.
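A rough sketch of how that could look (everything here is illustrative only: the file location, format, and chart names are all undecided):

```sh
# Illustrative sketch only. The preceding turnup phase would drop a small
# "addon list" object somewhere, e.g.:
cat > /etc/kubernetes/addons.yaml <<'EOF'
addons:
  - name: kube-dns
    chart: addons/kube-dns          # hypothetical chart reference
    parameters:
      clusterDomain: cluster.local
  - name: dashboard
    chart: addons/dashboard
EOF

# ...and the thin layer would just walk that list and hand each entry to Helm,
# something along the lines of (parameter plumbing omitted):
helm install addons/kube-dns --name kube-dns
helm install addons/dashboard --name dashboard
```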

@errordeveloper
Member Author

To be honest, I am a little puzzled by the logistics of the next step: the tick boxes above suggest I should create a formal proposal PR to the main repo, but it seems more natural for this work to go either into "kube-deploy" or a separate repo... thoughts?

@aronchick
Contributor

IMO, the repo for the proposal could go anywhere that makes sense - that's a bit of an implementation detail. We should update the template to work in whatever repo is the right one (in this case, it should probably go in 'kubernetes-anywhere', I'd imagine).

I think kube-deploy may be a bit out of date.

@errordeveloper
Member Author

Thanks, @aronchick 👍

@mikedanese
Member

The point of abstracting an addon manager is to reduce the duplicated effort that every deployment automation currently goes through in building addon management. It feels weird to shove the proposal in kubernetes-anywhere since this feature won't be implemented in kubernetes-anywhere. Can we find a centralized place to post design proposals? How about this repo? @pwittrock

@jbeda
Contributor

jbeda commented Jul 5, 2016

+1 to what @mikedanese says. We need to view this as a cross-cutting thing that isn't tied to any particular deployment tool.

Note that we essentially have two types of addons: ones that are pretty much necessary to make a cluster functional (DNS) and ones that are actually optional.

I question using a tool like helm for the addons in the first category. It is a big dependency to take on for every Kubernetes install, and doing so would essentially bring it into the "core". We should be able to find a lighter-weight solution for deploying and managing DNS.
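For that first category, one lighter-weight possibility (a sketch only; the paths are illustrative) is to keep core addons as plain manifests that the deployment applies directly, with no chart tooling or release tracking involved:

```sh
# Sketch: core addons as static manifests, applied straight to the cluster.
kubectl apply -f /etc/kubernetes/addons/kube-dns/
```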

@aronchick
Contributor

Interesting - I hadn't thought of this as the central place for proposals, but I don't see any reason why that would break. The only problem (of course) is that we would need to enforce that elsewhere - otherwise you end up with 15 repos for proposals PLUS this centralized one, which leads either to people missing stuff or to duplication.

Open to either suggestion.

@erictune
Member

erictune commented Jul 6, 2016

I would like people to be able to subscribe to kubernetes/features and see a relatively succinct stream of updates on feature progress.

That will not be possible if there are design proposal PRs being discussed in the repo.

So, please not in this repo.

@idvoretskyi idvoretskyi modified the milestone: v1.4 Jul 18, 2016
@idvoretskyi idvoretskyi added the sig/cluster-lifecycle Categorizes an issue or PR as relevant to SIG Cluster Lifecycle. label Aug 4, 2016
@goltermann goltermann modified the milestones: v1.5, v1.4 Aug 17, 2016
@errordeveloper
Member Author

xref kubernetes/k8s.io#5

@bgrant0607 bgrant0607 mentioned this issue Oct 11, 2016
23 tasks
@idvoretskyi
Member

@errordeveloper @kubernetes/sig-cluster-lifecycle can you clarify the actual status of the feature?

@idvoretskyi
Member

@errordeveloper @kubernetes/sig-cluster-lifecycle any update on this?

@chrislovecnm

This is a big deal and a big need in the community. I would hope that we would use helm, since it is being used a ton. Happy to talk through use cases.

@chrislovecnm

@idvoretskyi all of us are just getting back from kubecon btw 😀

@erictune
Member

From reading kubernetes/kubernetes#29551, it seems this is going to use kubectl apply?

@idvoretskyi
Member

@errordeveloper @kubernetes/sig-cluster-lifecycle does this feature target alpha, beta or stable for 1.5?

@luxas
Member

luxas commented Nov 17, 2016

If we have time for this in the v1.6 timeframe I think it will be alpha.

@erictune Yes, basic manifest installation will be done with kubectl apply, but we might consider a more advanced alternative as well (maybe using helm as the base; we haven't decided anything yet).
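For illustration only (the paths are just examples), the basic path could be as simple as re-applying a directory of addon manifests; apply is idempotent, so this would be safe to run on every control-plane start:

```sh
# Minimal sketch of the "basic manifest installation" path; safe to re-run.
kubectl apply --recursive -f /etc/kubernetes/addons/
```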

@erictune erictune removed this from the v1.5 milestone Nov 17, 2016
@palnabarun
Member

The current release schedule is:

  • Monday, January 6th - Release Cycle Begins
  • Tuesday, January 28th EOD PST - Enhancements Freeze
  • Thursday, March 5th, EOD PST - Code Freeze
  • Monday, March 16th - Docs must be completed and reviewed
  • Tuesday, March 24th - Kubernetes 1.18.0 Released

@palnabarun
Member

@justinsb Just a friendly reminder, we are just 7 days away from the Enhancement Freeze (Tuesday, January 28th).

@palnabarun
Member

@justinsb Just a friendly reminder, we are just 2 days away from the Enhancement Freeze (3 PM Pacific Time, Tuesday, January 28th).

@palnabarun
Member

Unfortunately, the deadline for the 1.18 Enhancement freeze has passed. For now, this is being removed from the milestone. If there is a need to get this in, please file an enhancement exception.

ingvagabund pushed a commit to ingvagabund/enhancements that referenced this issue Apr 2, 2020
formalize what a must-gather image includes
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 28, 2020
@palnabarun
Member

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 28, 2020
@msedzins

Hey there @justinsb -- 1.19 Enhancements shadow here. I wanted to check in and see if you think this Enhancement will be graduating in 1.19?

In order to have this part of the release:

  1. The KEP PR must be merged in an implementable state
  2. The KEP must have test plans
  3. The KEP must have graduation criteria.

The current release schedule is:

  • Monday, April 13: Week 1 - Release cycle begins
  • Tuesday, May 19: Week 6 - Enhancements Freeze
  • Thursday, June 25: Week 11 - Code Freeze
  • Thursday, July 9: Week 14 - Docs must be completed and reviewed
  • Tuesday, August 4: Week 17 - Kubernetes v1.19.0 released
  • Thursday, August 20: Week 19 - Release Retrospective

If you do, I'll add it to the 1.19 tracking sheet (http://bit.ly/k8s-1-19-enhancements). Once coding begins please list all relevant k/k PRs in this issue so they can be tracked properly. 👍

Thanks!

@msedzins

Hi there @justinsb,

Kind reminder about my question above.

Regards,
Mirek

2 similar comments
@msedzins

msedzins commented May 4, 2020

Hi there @justinsb,

Kind reminder about my question above.

Regards,
Mirek

@msedzins

msedzins commented May 9, 2020

Hi there @justinsb,

Kind reminder about my question above.

Regards,
Mirek

@msedzins

Hey @justinsb, Enhancement shadow for the v1.19 release cycle here. Just following up on my earlier update to inform you of the upcoming Enhancement Freeze scheduled on Tuesday, May 19.

Regards,
Mirek

@palnabarun
Member

@justinsb -- Unfortunately the deadline for the 1.19 Enhancement freeze has passed. For now this is being removed from the milestone and 1.19 tracking sheet. If there is a need to get this in, please file an enhancement exception.

brahmaroutu added a commit to brahmaroutu/enhancements that referenced this issue Jul 29, 2020
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 18, 2020
@fabriziopandini
Member

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 18, 2020
@kikisdeliveryservice
Member

Hi @justinsb

Enhancements Lead here. Any plans for this in 1.20?

Thanks,
Kirsten

@kikisdeliveryservice
Member

Hi @justinsb

Any updates on this for 1.20? Enhancements Freeze is October 6th and by that time we require a merged KEP in an implementable state with test plans and graduation criteria. I currently don't see a related KEP for this.

If you plan on this graduating in 1.20, please let me know.

Best,
Kirsten

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 27, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 26, 2021
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-contributor-experience at kubernetes/community.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-contributor-experience at kubernetes/community.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
