
In-Place Update of Pod Resources #1287

Open
28 of 31 tasks
vinaykul opened this issue Oct 8, 2019 · 224 comments
Labels
  • kind/api-change: Categorizes issue or PR as related to adding, removing, or otherwise changing an API
  • lead-opted-in: Denotes that an issue has been opted in to a release
  • sig/autoscaling: Categorizes an issue or PR as relevant to SIG Autoscaling.
  • sig/node: Categorizes an issue or PR as relevant to SIG Node.
  • sig/scheduling: Categorizes an issue or PR as relevant to SIG Scheduling.
  • stage/beta: Denotes an issue tracking an enhancement targeted for Beta status
  • tracked/no: Denotes an enhancement issue is NOT actively being tracked by the Release Team

Comments

@vinaykul
Member

vinaykul commented Oct 8, 2019

Enhancement Description

Please keep this description up to date. This will help the Enhancement Team efficiently track the evolution of the enhancement.

  1. Identify the CRI changes needed for the UpdateContainerResources API, and define a response message for UpdateContainerResources

    • Extend the UpdateContainerResources API to return info such as ‘not supported’, ‘not enough memory’, ‘successful’, ‘pending page evictions’, etc.
    • Define the expected behavior of the runtime when UpdateContainerResources is invoked, and the timeout duration of the CRI call.
      • Resolution: Separate KEP for CRI changes.
        • Discussed draft CRI changes with SIG-Node on Oct 22, and we agreed to do this as an incremental change outside the scope of this KEP, in a new mini-KEP. It does not block implementation of this KEP.
  2. Define behavior when multiple containers are being resized, and UpdateContainerResources fails for one or more containers.

    • One possible solution:
      • Do not update Status.Resources.Limits if the UpdateContainerResources API fails, and keep retrying until it succeeds.
  3. Check with API reviewers whether we can keep maps instead of a list of named sub-objects for ResizePolicy.

    • After discussion with @liggitt, we are going to use a list of named sub-objects for extensibility (see the sketch after this list).
  4. Can we find a more intuitive name for ResizePolicy?

  5. Can we use ResourceVersion to figure out the ordering of Pod resize requests?

  6. Do we need to add back the ‘RestartPod’ resize policy? Is there a strong use-case for it?

    • Resolution: No.
      • Discussed with SIG-Node on Oct 15th; for simplicity we are not adding a RestartPod policy, and will revisit if we encounter problems.
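
A minimal sketch of the resolution in item 3, assuming Go-style API types: a list of named sub-objects keeps each entry open to new fields later, which a plain map of resource name to policy string would not. All type and field names below are illustrative assumptions, not the final API.

package main

import "fmt"

// ContainerResizePolicy is a hypothetical named sub-object: it pairs a
// resource with the policy applied when that resource is resized. Because
// each entry is a struct, new fields can be added to it later without
// breaking the API; a map[string]string could not grow this way.
type ContainerResizePolicy struct {
	ResourceName string // e.g. "cpu" or "memory"
	Policy       string // e.g. "NoRestart" or "RestartContainer"
}

func main() {
	// The container spec would carry a list of these entries.
	resizePolicy := []ContainerResizePolicy{
		{ResourceName: "cpu", Policy: "NoRestart"},
		{ResourceName: "memory", Policy: "RestartContainer"},
	}
	for _, p := range resizePolicy {
		fmt.Printf("%s -> %s\n", p.ResourceName, p.Policy)
	}
}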

Alpha Feature Code Issues:
These are items and issues discovered during code review that need further discussion and must be addressed before Beta.

  1. Can we figure out GetPodQOS differently once it is determined on pod create? See In-place Pod Vertical Scaling feature kubernetes#102884 (comment)
  2. How do we deal with a pod that has 1m/1m CPU requests/limits? See In-place Pod Vertical Scaling feature kubernetes#102884 (comment)
  3. Add internal representation of ContainerStatus.Resources in kubeContainer. Convert it to ContainerStatus.Resources in kubelet_pods generate functions. See In-place Pod Vertical Scaling feature kubernetes#102884 (comment) and In-place Pod Vertical Scaling feature kubernetes#102884 (comment) and In-place Pod Vertical Scaling feature kubernetes#102884 (comment)
  4. Can we get rid of the resize mutex? Is there a better way to handle resize retries (see the sketch after this list)? See In-place Pod Vertical Scaling feature kubernetes#102884 (comment)
  5. Can we recover from resize checkpoint store failures? See In-place Pod Vertical Scaling feature kubernetes#102884 (comment)
  6. CRI clarification for ContainerStatus.Resources and how to handle runtimes that don't support it. See In-place Pod Vertical Scaling feature kubernetes#102884 (comment)
  7. Add real values to dockershim test for ContainerStatus.Resources In-place Pod Vertical Scaling feature kubernetes#102884 (comment)
    • Resolution: Not required due to dockershim deprecation.
  8. Change PodStatus.Resources from v1.ResourceRequirements to *v1.ResourceRequirements
    • Resolution: Fixed
  9. Address all places in the code that have 'TODO(vinaykul)'
  10. The current implementation does not work with the node topology manager enabled. This limitation is not captured in the KEP. Add this to the release documentation for alpha; we will address it in beta. See In-place Pod Vertical Scaling feature kubernetes#102884 (comment)
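
The retry approach resolved in item 2 of the description above (and questioned in item 4 of this list) can be illustrated with a small sketch. This is a hedged example, not the kubelet's actual implementation: updateContainerResources is a hypothetical stand-in for the CRI call, and the backoff values are arbitrary.

package main

import (
	"errors"
	"fmt"
	"time"
)

// updateContainerResources is a hypothetical stand-in for the CRI
// UpdateContainerResources call; here it always fails so the retries show.
func updateContainerResources(containerID string) error {
	return errors.New("not enough memory")
}

// retryResize retries until the runtime accepts the resize, doubling the
// wait between attempts. Per the resolution in item 2 of the description,
// Status.Resources.Limits would be updated only after a successful call.
func retryResize(containerID string, maxAttempts int) error {
	backoff := 100 * time.Millisecond
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		err := updateContainerResources(containerID)
		if err == nil {
			return nil
		}
		fmt.Printf("attempt %d failed: %v; retrying in %v\n", attempt, err, backoff)
		time.Sleep(backoff)
		backoff *= 2
	}
	return fmt.Errorf("resize of %s did not succeed after %d attempts", containerID, maxAttempts)
}

func main() {
	_ = retryResize("container-123", 3)
}
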
@k8s-ci-robot k8s-ci-robot added the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label Oct 8, 2019
@vinaykul
Member Author

vinaykul commented Oct 8, 2019

/assign @vinaykul

@jeremyrickard
Contributor

jeremyrickard commented Oct 9, 2019

👋 Hey there @vinaykul. I'm a shadow on the 1.17 Release Team, working on Enhancements. We're tracking issues for the 1.17 release, and I wanted to reach out and ask whether we should track this (or, more specifically, the In-Place Update of Pod Resources feature) for 1.17?

The current release schedule is:

Monday, September 23 - Release Cycle Begins
Tuesday, October 15, EOD PST - Enhancements Freeze
Thursday, November 14, EOD PST - Code Freeze
Tuesday, November 22 - Docs must be completed and reviewed
Monday, December 9 - Kubernetes 1.17.0 Released

We're only 5 days away from the Enhancements Freeze, so if you intend to graduate this capability in the 1.17 release, here are the requirements that you'll need to satisfy:

  • KEP must be merged in implementable state
  • KEP must define graduation criteria
  • KEP must have a test plan defined

Thanks @vinaykul

@vinaykul
Member Author

  • KEP must be merged in implementable state
  • KEP must define graduation criteria
  • KEP must have a test plan defined

Hi @jeremyrickard I'll do my best to get this KEP to an implementable state by next Tuesday, but it looks like a stretch at this point; the major item is completing the API review with @thockin, and that depends on his availability.

The actual code changes are not that big. Nevertheless, the safe option would be to track this for the 1.18.0 release; I'll update you by next Monday.

CC: @dashpole @derekwaynecarr @dchen1107

@mrbobbytables mrbobbytables added sig/autoscaling Categorizes an issue or PR as relevant to SIG Autoscaling. sig/node Categorizes an issue or PR as relevant to SIG Node. labels Oct 14, 2019
@k8s-ci-robot k8s-ci-robot removed needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Oct 14, 2019
@mrbobbytables mrbobbytables added tracked/no Denotes an enhancement issue is NOT actively being tracked by the Release Team tracked/yes Denotes an enhancement issue is actively being tracked by the Release Team and removed tracked/no Denotes an enhancement issue is NOT actively being tracked by the Release Team labels Oct 14, 2019
@mrbobbytables mrbobbytables added this to the v1.17 milestone Oct 14, 2019
@vinaykul
Member Author

@jeremyrickard @mrbobbytables This KEP will take some more discussion; the key item is the API review. It does not look like @thockin or another API reviewer is available soon. Could we please track this KEP for v1.18?
Thanks,

@jeremyrickard
Contributor

/milestone v1.18

@k8s-ci-robot k8s-ci-robot modified the milestones: v1.17, v1.18 Oct 14, 2019
@jeremyrickard jeremyrickard added tracked/no Denotes an enhancement issue is NOT actively being tracked by the Release Team and removed tracked/yes Denotes an enhancement issue is actively being tracked by the Release Team labels Oct 14, 2019
@vinaykul
Member Author

@PatrickLang Here's a first stab at the proposed CRI change to allow UpdateContainerResources to work with Windows. Please take a look; let's discuss it in tomorrow's SIG meeting.

root@skibum:~/km16/staging/src/k8s.io/cri-api# git diff --cached .
diff --git a/staging/src/k8s.io/cri-api/pkg/apis/runtime/v1alpha2/api.proto b/staging/src/k8s.io/cri-api/pkg/apis/runtime/v1alpha2/api.proto
index 0290d0f..b05bb56 100644
--- a/staging/src/k8s.io/cri-api/pkg/apis/runtime/v1alpha2/api.proto
+++ b/staging/src/k8s.io/cri-api/pkg/apis/runtime/v1alpha2/api.proto
@@ -924,14 +924,33 @@ message ContainerStatusResponse {
     map<string, string> info = 2;
 }
 
+// ContainerResources holds the fields representing a container's resource limits
+message ContainerResources {
+    // Resource configuration specific to Linux container.
+    LinuxContainerResources linux = 1;
+    // Resource configuration specific to Windows container.
+    WindowsContainerResources windows = 2;
+}
+
 message UpdateContainerResourcesRequest {
     // ID of the container to update.
     string container_id = 1;
-    // Resource configuration specific to Linux containers.
+    // Resource configuration specific to Linux container.
     LinuxContainerResources linux = 2;
+    // Resource configuration specific to Windows container.
+    WindowsContainerResources windows = 3;
 }
 
-message UpdateContainerResourcesResponse {}
+message UpdateContainerResourcesResponse {
+    // ID of the container that was updated.
+    string container_id = 1;
+    // Resource configuration currently applied to the Linux container.
+    LinuxContainerResources linux = 2;
+    // Resource configuration currently applied to the Windows container.
+    WindowsContainerResources windows = 3;
+    // Error message if UpdateContainerResources fails in the runtime.
+    string error_message = 4;
+}
 
 message ExecSyncRequest {
     // ID of the container.
diff --git a/staging/src/k8s.io/cri-api/pkg/apis/services.go b/staging/src/k8s.io/cri-api/pkg/apis/services.go
index 9a22ecb..9f1d893 100644
--- a/staging/src/k8s.io/cri-api/pkg/apis/services.go
+++ b/staging/src/k8s.io/cri-api/pkg/apis/services.go
@@ -44,7 +44,7 @@ type ContainerManager interface {
        // ContainerStatus returns the status of the container.
        ContainerStatus(containerID string) (*runtimeapi.ContainerStatus, error)
        // UpdateContainerResources updates the cgroup resources for the container.
-       UpdateContainerResources(containerID string, resources *runtimeapi.LinuxContainerResources) error
+       UpdateContainerResources(containerID string, resources *runtimeapi.ContainerResources) error
        // ExecSync executes a command in the container, and returns the stdout output.
        // If command exits with a non-zero exit code, an error is returned.
        ExecSync(containerID string, cmd []string, timeout time.Duration) (stdout []byte, stderr []byte, err error)
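
A minimal usage sketch, assuming the proposed api.proto change above is adopted: the kubelet populates exactly one platform-specific field of the new ContainerResources wrapper and passes it to UpdateContainerResources. The struct definitions here are hand-written stand-ins for the generated CRI types, and the client call is hypothetical.

package main

import "fmt"

// Stand-ins for the generated CRI types in the diff above.
type LinuxContainerResources struct {
	CpuPeriod          int64
	CpuQuota           int64
	MemoryLimitInBytes int64
}

type WindowsContainerResources struct {
	CpuMaximum         int64
	MemoryLimitInBytes int64
}

// ContainerResources mirrors the new wrapper message: callers set either
// the Linux or the Windows field, never both.
type ContainerResources struct {
	Linux   *LinuxContainerResources
	Windows *WindowsContainerResources
}

// updateContainerResources shows which branch a runtime would take; a real
// implementation would send an UpdateContainerResourcesRequest over the CRI.
func updateContainerResources(containerID string, r *ContainerResources) error {
	switch {
	case r.Linux != nil:
		fmt.Printf("updating Linux container %s: %+v\n", containerID, *r.Linux)
	case r.Windows != nil:
		fmt.Printf("updating Windows container %s: %+v\n", containerID, *r.Windows)
	default:
		return fmt.Errorf("no platform resources set for %s", containerID)
	}
	return nil
}

func main() {
	_ = updateContainerResources("abc123", &ContainerResources{
		Linux: &LinuxContainerResources{CpuPeriod: 100000, CpuQuota: 50000, MemoryLimitInBytes: 512 << 20},
	})
}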

@dashpole
Contributor

dashpole commented Oct 24, 2019

@vinaykul It looks like since the above PR was merged, this was removed from the API review queue. I believe you need to open a new PR that moves the state to implementable, and then add the API-review label to get it back in the queue and get a reviewer.

Edit: you should also include any other changes (e.g. windows CRI changes) required to move the feature to implementable in the PR as well.

@vinaykul
Member Author

@vinaykul It looks like since the above PR was merged, this was removed from the API review queue. I believe you need to open a new PR that moves the state to implementable, and then add the API-review label to get it back in the queue and get a reviewer.

Edit: you should also include any other changes (e.g. windows CRI changes) required to move the feature to implementable in the PR as well.

@dashpole Thanks!

I've started a provisional mini-KEP for the CRI changes, per our discussion last week (Dawn mentioned last week that we should take that up separately). IMHO the CRI changes do not block the implementation of this KEP, as they are between the Kubelet and the runtime, and the user is not affected by them.

In a second commit to the same PR, I've addressed another key issue (update API failure handling), and requested a change to move the primary KEP to implementable.

With this, everything is in one place, and we can use it for API review.

@palnabarun
Member

palnabarun commented Jan 13, 2020

Hey there @vinaykul -- 1.18 Enhancements shadow here. I wanted to check in and see if you think this Enhancement will be graduating to alpha in 1.18?

The current release schedule is:

  • Monday, January 6th - Release Cycle Begins
  • Tuesday, January 28th EOD PST - Enhancements Freeze
  • Thursday, March 5th, EOD PST - Code Freeze
  • Monday, March 16th - Docs must be completed and reviewed
  • Tuesday, March 24th - Kubernetes 1.18.0 Released

To be included in the release,

  1. The KEP PR must be merged
  2. The KEP must be in an implementable state
  3. The KEP must have test plans and graduation criteria.

If you would like to include this enhancement, once coding begins please list all relevant k/k PRs in this issue so they can be tracked properly. 👍

We'll be tracking enhancements here: http://bit.ly/k8s-1-18-enhancements

Thanks! :)

@vinaykul
Member Author

Hey there @vinaykul -- 1.18 Enhancements shadow here. I wanted to check in and see if you think this Enhancement will be graduating to alpha in 1.18?


@palnabarun Yes, I'm planning to work towards the alpha code targets for this feature in 1.18. I've updated the KEP, adding test plan and graduation criteria sections that I will review with SIG-Node this week, and I hope to get it to implementable before Jan 28. I'll update this thread if anything changes.

@palnabarun
Member

Thank you @vinaykul for the updates. :)

@palnabarun
Member

/stage alpha

@k8s-ci-robot k8s-ci-robot added the stage/alpha Denotes an issue tracking an enhancement targeted for Alpha status label Jan 14, 2020
@palnabarun
Member

/milestone v1.18

@palnabarun palnabarun removed the tracked/no Denotes an enhancement issue is NOT actively being tracked by the Release Team label Jan 14, 2020
@tallclair
Member

Blog placeholder: kubernetes/website#48576

@tjons
Contributor

tjons commented Nov 4, 2024

Hey again @tallclair 👋 v1.32 Enhancements team here,

Just checking in as we approach code freeze at 02:00 UTC Friday 8th November 2024 (19:00 PDT Thursday 7th November 2024).

Here's where this enhancement currently stands:

  • All PRs to the Kubernetes repo that are related to your enhancement are linked in the above issue description (for tracking purposes).
  • All PR/s are ready to be merged (they have approved and lgtm labels applied) by the code freeze deadline. This includes tests.

For this enhancement, it looks like the following PRs are open and need to be merged before code freeze (and we need to update the Issue description to include all the related PRs of this KEP):

Additionally, please let me know if there are any other PRs in k/k not listed in the description or not linked with this GitHub issue that we should track for this KEP, so that we can maintain accurate status.

The status of this enhancement is marked as at risk for code freeze.

If you anticipate missing code freeze, you can file an exception request in advance. Thank you!

@tjons tjons moved this from Tracked for enhancements freeze to At risk for code freeze in 1.32 Enhancements Tracking Nov 4, 2024
@tallclair
Member

Thanks @tjons. Yes, I agree this is at risk for code freeze. We have a separate tracking board for this feature here: https://github.com/orgs/kubernetes/projects/178/views/2, and there are quite a few more PRs that need to merge by Thursday. I will continue to push on these, but there's a good chance we miss the deadline. With so many PRs, it will be easier for me to add them after the fact to the PR description.

@tjons
Contributor

tjons commented Nov 8, 2024

Hello @tallclair 👋 Enhancements team here,

Unfortunately, the implementation (code-related) PRs associated with this enhancement are not in a merge-ready state by code freeze, and hence this enhancement is now removed from the 1.32 milestone.

If you still wish to progress this enhancement in 1.32, please file an exception request as soon as possible, within three days. If you have any questions, you can reach out in the #release-enhancements channel on Slack and we'll be happy to help. Thanks!

/milestone clear

@k8s-ci-robot k8s-ci-robot removed this from the v1.32 milestone Nov 8, 2024
@tjons tjons moved this from At risk for code freeze to Removed from Milestone in 1.32 Enhancements Tracking Nov 8, 2024
@sreeram-venkitesh
Member

An exception has been filed.

@fsmunoz

fsmunoz commented Nov 11, 2024

The v1.32 Release Team is APPROVING this Code Freeze exception request. The updated deadline is 19:00 PDT Tuesday, 12th November 2024.
cc @tjons

/milestone v1.32

@k8s-ci-robot k8s-ci-robot added this to the v1.32 milestone Nov 11, 2024
@fsmunoz fsmunoz moved this from Removed from Milestone to Exception Required in 1.32 Enhancements Tracking Nov 11, 2024
@tjons
Contributor

tjons commented Nov 20, 2024

@tallclair did the work get completed here on time? I was at KubeCon, so if you could point me in the right direction (yes/no), that would be great.

@liggitt
Member

liggitt commented Nov 20, 2024

Beta promotion did land in kubernetes/kubernetes#128682, but was reverted in kubernetes/kubernetes#128875 due to multiple issues.

@liggitt
Member

liggitt commented Nov 20, 2024

So for 1.32, the state is that a lot of work landed, but the feature remained in alpha.

@fsmunoz

fsmunoz commented Nov 20, 2024

cc @tjons, we need to update the tracking.

@tjons
Contributor

tjons commented Nov 20, 2024

Thanks for the update! Will remove this from the milestone accordingly

/milestone clear

@haircommander
Contributor

/reopen

A GitHub workflow autoclosed this when I set the status to Done 🙃 sorry for the noise

@k8s-ci-robot k8s-ci-robot reopened this Dec 4, 2024
@k8s-ci-robot
Contributor

@haircommander: Reopened this issue.

In response to this:

/reopen

A GitHub workflow autoclosed this when I set the status to Done 🙃 sorry for the noise

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
