
Further differentiate performance characteristics associated with pod level QoS #276

Closed
derekwaynecarr opened this issue Apr 25, 2017 · 21 comments
Labels
  • kind/feature: Categorizes issue or PR as related to a new feature.
  • lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
  • sig/node: Categorizes an issue or PR as relevant to SIG Node.
  • stage/beta: Denotes an issue tracking an enhancement targeted for Beta status.
  • tracked/no: Denotes an enhancement issue is NOT actively being tracked by the Release Team.

Comments

@derekwaynecarr
Member

derekwaynecarr commented Apr 25, 2017

Feature Description

  • One-line feature description (can be used as a release note): Give users a reason to choose Guaranteed pods over Burstable pods. Today, Guaranteed pods carry a penalty, because CPU limits are enforced via CFS quota throttling. Ideally, the kubelet can manage Guaranteed pods in a way that provides material benefits over Burstable pods, particularly for latency-sensitive workloads (see the sketch after this list).
  • Primary contact (assignee): @sjenning @ConnorDoyle
  • Responsible SIGs: sig-node
  • Design proposal link (community repo): Add CPU manager proposal. community#654
  • Reviewer(s) - (for LGTM) recommend having 2+ reviewers (at least one from code-area OWNERS file) agreed to review. Reviewers from multiple companies preferred: @vishh @derekwaynecarr
  • Approver (likely from SIG/area to which feature belongs): @derekwaynecarr
  • Initial target stage (alpha/beta/stable) and release (x.y): Alpha 1.8, Beta 1.10
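
As a concrete illustration of the distinction, here is a minimal sketch using client-go types (the pod name and image are placeholders, not taken from this issue): a pod in which every container sets requests equal to limits is classified as Guaranteed, and an integer CPU quantity additionally makes it eligible for exclusive cores under the CPU manager's static policy.

```go
// Minimal sketch, assuming k8s.io/api and k8s.io/apimachinery are available.
// A pod whose containers all set requests == limits lands in the Guaranteed
// QoS class; with an integer CPU quantity it can receive exclusive cores
// under the CPU manager static policy.
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "guaranteed-example"}, // hypothetical name
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:  "app",
				Image: "registry.k8s.io/pause:3.9", // placeholder image
				Resources: v1.ResourceRequirements{
					// requests == limits for every listed resource => Guaranteed.
					Requests: v1.ResourceList{
						v1.ResourceCPU:    resource.MustParse("2"),
						v1.ResourceMemory: resource.MustParse("1Gi"),
					},
					Limits: v1.ResourceList{
						v1.ResourceCPU:    resource.MustParse("2"),
						v1.ResourceMemory: resource.MustParse("1Gi"),
					},
				},
			}},
		},
	}
	fmt.Printf("%s requests CPU %s\n", pod.Name,
		pod.Spec.Containers[0].Resources.Requests.Cpu())
}
```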
@dchen1107 dchen1107 added the sig/node Categorizes an issue or PR as relevant to SIG Node. label May 1, 2017
@dchen1107
Member

@derekwaynecarr I didn't target this for 1.7 since it relies on our design.

@ConnorDoyle
Contributor

/assign

@derekwaynecarr
Member Author

@ConnorDoyle @dchen1107 - updated the issue description accordingly.

@idvoretskyi idvoretskyi modified the milestones: 1.8, next-milestone Jul 25, 2017
@idvoretskyi idvoretskyi added kind/feature Categorizes issue or PR as related to a new feature. stage/alpha Denotes an issue tracking an enhancement targeted for Alpha status and removed help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. labels Jul 25, 2017
@ConnorDoyle
Contributor

xref #375

@idvoretskyi
Member

@ConnorDoyle @sjenning @kubernetes/sig-node-feature-requests any updates for 1.8? Is this feature still on track for the release?

@sjenning
Contributor

sjenning commented Sep 5, 2017

@idvoretskyi yes, a series of PRs was merged to enable the CPU manager, which is a component of this issue:

1/6 kubernetes/kubernetes#49186 (closed, broken out into following PRs)
2/6 kubernetes/kubernetes#51132
3/6 kubernetes/kubernetes#51140
4/6 kubernetes/kubernetes#51357
5/6 kubernetes/kubernetes#51180
6/6 kubernetes/kubernetes#51041 (still open, e2e tests)

@calebamiles
Contributor

The last PR looks like it's scheduled to merge. @sjenning, do you have a docs PR in flight?

@balajismaniam

@calebamiles Some docs related to the CPU manager have already merged (https://github.com/kubernetes/kubernetes.github.io/blob/release-1.8/docs/tasks/administer-cluster/cpu-management-policies.md). There are some PRs in flight to update them.

@calebamiles
Contributor

Thanks for the update, @balajismaniam. @derekwaynecarr suggested that this feature issue was essentially a dupe of #375 but predates it. How would we feel about closing this issue in favor of #375?

cc: @kubernetes/sig-node-feature-requests, @kubernetes/kubernetes-release-managers

@ConnorDoyle
Contributor

Depends on what else besides CPU manager @derekwaynecarr has in mind to differentiate the G class. If there's nothing specific planned for now then +1 to close it.

@resouer

resouer commented Sep 27, 2017

@balajismaniam @derekwaynecarr One thing that seems to be missing from the doc and the proposal: when I set cpu: "2" on a container, what value should end up in the pod-level cgroup? We may need to skip containers with a CPU request >= 1 when ensuring pod QoS (see the sketch below).
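
For reference, a minimal sketch of the conversions in question (this mirrors the kubelet's CFS helpers only in spirit; the function names and the simplified math here are assumptions, not the exact implementation):

```go
// Sketch of how a CPU value like "2" (2000 millicores) maps to pod-level
// cgroup settings. Simplified; the kubelet's real helpers also handle
// minimum shares and unlimited quota.
package main

import "fmt"

const (
	sharesPerCPU = 1024   // cpu.shares granted per whole CPU requested
	quotaPeriod  = 100000 // CFS period in microseconds (100ms)
)

// milliCPUToShares converts a CPU request in millicores to cpu.shares.
func milliCPUToShares(milliCPU int64) int64 {
	return milliCPU * sharesPerCPU / 1000
}

// milliCPUToQuota converts a CPU limit in millicores to cpu.cfs_quota_us
// against the fixed period above.
func milliCPUToQuota(milliCPU int64) int64 {
	return milliCPU * quotaPeriod / 1000
}

func main() {
	// cpu: "2" => 2000 millicores
	fmt.Println(milliCPUToShares(2000)) // 2048 shares
	fmt.Println(milliCPUToQuota(2000))  // 200000us quota, i.e. 2 full CPUs per period
}
```

The open question above is whether containers that receive exclusive cores (Guaranteed, with a CPU request >= 1) should be excluded from this pod-level accounting.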

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 6, 2018
@sjenning
Contributor

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 10, 2018
@derekwaynecarr derekwaynecarr added the stage/beta Denotes an issue tracking an enhancement targeted for Beta status label Mar 27, 2018
@derekwaynecarr derekwaynecarr removed the stage/alpha Denotes an issue tracking an enhancement targeted for Alpha status label Mar 27, 2018
@derekwaynecarr
Member Author

updated beta status for 1.10

@justaugustus
Member

@ConnorDoyle @sjenning @derekwaynecarr @kubernetes/sig-node-feature-requests
Any plans for this in 1.11?

If so, can you please ensure the feature is up-to-date with the appropriate:

  • Description
  • Milestone
  • Assignee(s)
  • Labels:
    • stage/{alpha,beta,stable}
    • sig/*
    • kind/feature

cc @idvoretskyi

@justaugustus justaugustus removed this from the v1.8 milestone Jul 1, 2018
@justaugustus
Member

This feature currently has no milestone, so we'd like to check in and see if there are any plans for this in Kubernetes 1.12.

If so, please ensure that this issue is up-to-date with ALL of the following information:

  • One-line feature description (can be used as a release note):
  • Primary contact (assignee):
  • Responsible SIGs:
  • Design proposal link (community repo):
  • Link to e2e and/or unit tests:
  • Reviewer(s) - (for LGTM) recommend having 2+ reviewers (at least one from code-area OWNERS file) agreed to review. Reviewers from multiple companies preferred:
  • Approver (likely from SIG/area to which feature belongs):
  • Feature target (which target equals which milestone):
    • Alpha release target (x.y)
    • Beta release target (x.y)
    • Stable release target (x.y)

Set the following:

  • Description
  • Assignee(s)
  • Labels:
    • stage/{alpha,beta,stable}
    • sig/*
    • kind/feature

Once this feature is appropriately updated, please explicitly ping @justaugustus, @kacole2, @robertsandoval, @rajendar38 to note that it is ready to be included in the Features Tracking Spreadsheet for Kubernetes 1.12.


Please note that Features Freeze is tomorrow, July 31st, after which any incomplete Feature issues will require an Exception request to be accepted into the milestone.

In addition, please be aware of the following relevant deadlines:

  • Docs deadline (open placeholder PRs): 8/21
  • Test case freeze: 8/28

Please make sure all PRs for features have relevant release notes included as well.

Happy shipping!

P.S. This was sent via automation

@justaugustus justaugustus added the tracked/no Denotes an enhancement issue is NOT actively being tracked by the Release Team label Aug 4, 2018
@kacole2

kacole2 commented Oct 8, 2018

Hi,
This enhancement has been tracked before, so we'd like to check in and see if there are any plans for this to graduate stages in Kubernetes 1.13. This release is targeted to be more ‘stable’ and will have an aggressive timeline. Please only include this enhancement if there is a high level of confidence that it will meet the following deadlines:

  • Docs (open placeholder PRs): 11/8
  • Code Slush: 11/9
  • Code Freeze Begins: 11/15
  • Docs Complete and Reviewed: 11/27

Please take a moment to update the milestones on your original post for future tracking and ping @kacole2 if it needs to be included in the 1.13 Enhancements Tracking Sheet

Thanks!

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 6, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Feb 5, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
