
Add KEP skeleton & initial proposal for in-place update of Pod resources #2908

Closed
wants to merge 1 commit

Conversation

@kgolab (Author) commented Nov 6, 2018

This proposal aims to allow Pod resource requests & limits to be updated in place, without needing to restart the Pod or its Containers.

This commit is aimed at starting the KEP process.
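For illustration, this is the kind of mutation the proposal would permit: patching the resources of a running Pod's container without recreating it. A hypothetical sketch only — the container name, file name, and values are made up, and as of this PR the apiserver rejects such updates:

```go
package main

import "fmt"

func main() {
	// Hypothetical strategic-merge patch that this KEP would make legal to
	// apply to a *running* pod, e.g. via:
	//   kubectl patch pod web-0 -p "$(go run patch.go)"
	// The container name "app" and the resource values are placeholders.
	patch := `{
  "spec": {
    "containers": [{
      "name": "app",
      "resources": {
        "requests": {"cpu": "1500m", "memory": "1Gi"},
        "limits":   {"cpu": "2",     "memory": "2Gi"}
      }
    }]
  }
}`
	fmt.Println(patch)
}
```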

@k8s-ci-robot k8s-ci-robot added size/L Denotes a PR that changes 100-499 lines, ignoring generated files. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Nov 6, 2018
@k8s-ci-robot (Contributor)

Hi @kgolab. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added kind/kep sig/architecture Categorizes an issue or PR as relevant to SIG Architecture. labels Nov 6, 2018
@k8s-ci-robot (Contributor)

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
To fully approve this pull request, please assign additional approvers.
We suggest the following additional approver: calebamiles

If they are not already assigned, you can assign the PR to them by writing /assign @calebamiles in a comment when ready.

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@thockin (Member) commented Nov 6, 2018

Can you detail here the relationship with #1719, which already has a lot of feedback? The doc says it builds upon it, but does that mean people need to digest that one first? Or that this replaces that one?

@kgolab (Author) commented Nov 7, 2018

Regarding #1719 - this KEP partially supersedes it.

The goal of this KEP is to provide building blocks for Controllers wanting to use in-place resource updates, without concentrating on any specific Controller.

@jdumars (Member) commented Nov 15, 2018

/ok-to-test

@k8s-ci-robot k8s-ci-robot added ok-to-test Indicates a non-member PR verified by an org member that is safe to test. and removed needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Nov 15, 2018
@davidopp (Member)

@bsalamat

@bsalamat (Member)

/sig scheduling

@k8s-ci-robot k8s-ci-robot added the sig/scheduling Categorizes an issue or PR as relevant to SIG Scheduling. label Nov 16, 2018
@bsalamat (Member)

/assign

@justaugustus (Member)

REMINDER: KEPs are moving to k/enhancements on November 30. Please attempt to merge this KEP before then to signal consensus.
For more details on this change, review this thread.

Any questions regarding this move should be directed to that thread and not asked on GitHub.

@justaugustus (Member)

KEPs have moved to k/enhancements.
This PR will be closed and any additional changes to this KEP should be submitted to k/enhancements.
For more details on this change, review this thread.

Any questions regarding this move should be directed to that thread and not asked on GitHub.
/close

@k8s-ci-robot (Contributor)

@justaugustus: Closed this PR.

In response to this:

> KEPs have moved to k/enhancements.
> This PR will be closed and any additional changes to this KEP should be submitted to k/enhancements.
> For more details on this change, review this thread.
>
> Any questions regarding this move should be directed to that thread and not asked on GitHub.
> /close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

A Member left a review comment on this passage of the KEP draft:

> …but might be possible if some conditions change,
> * Rejected - resource update was rejected by any of the components involved.
>
> To provide some fine-grained control to the user, …

Please review our early thoughts below:

Policy controls: After going through pull 1719, we feel it may be a good idea to have two distinct levels of policy control:

  1. Pod-level resize policy (from Slawomir’s feedback in our document), where the scheduler determines the resize action at the pod level. Proposed policy values:
    1. InPlacePreferred (default) - resize the pod on the current node if possible, reschedule if not.
    2. InPlaceOnly - resize the pod on the current node; fail the request if not possible.
    3. Reschedule - always reschedule the pod (for potential use by VPA ‘Recreate’ mode).
  2. Container-level restart policy, discussed in pull 1719 - cpu/memory restart/live-resize. To make it simpler, it may be sufficient to have ‘LiveResize’ and ‘Restart’ (default) policy options for cpu/memory that dictate whether a particular container is restarted or resource-updated, depending on the policy and the resource type affected. If the UpdateContainerResources CRI API fails, restart the container as a fallback; this should cover cases of JVM / legacy apps that cannot handle UpdateContainerResources. This policy is orthogonal to the pod-level resize policy above. (A hypothetical type sketch follows this list.)
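A minimal Go sketch of the two policy knobs described above. The type and constant names are illustrative only; none of them are part of any Kubernetes API:

```go
package main

// PodResizePolicy is the hypothetical pod-level policy consulted by the scheduler.
type PodResizePolicy string

const (
	InPlacePreferred PodResizePolicy = "InPlacePreferred" // resize on the current node if possible, else reschedule (default)
	InPlaceOnly      PodResizePolicy = "InPlaceOnly"      // resize on the current node; fail the request otherwise
	Reschedule       PodResizePolicy = "Reschedule"       // always reschedule (e.g. for VPA 'Recreate' mode)
)

// ContainerResizePolicy is the hypothetical per-container, per-resource policy
// consulted by the kubelet when applying a resize.
type ContainerResizePolicy string

const (
	LiveResize ContainerResizePolicy = "LiveResize" // apply via the UpdateContainerResources CRI call
	Restart    ContainerResizePolicy = "Restart"    // restart the container with the new resources (default)
)

func main() {}
```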

KEP design: I need to go over this more thoroughly. At first glance, we see the following flow (a condition-flow sketch follows this list):

  1. A resources update to a controller’s Template.PodSpec is propagated by the controller into PodSpec resource updates for its running pod instances, and the controller sets PodStatus.Conditions[type=PodResizeResources] to ResizeRequested.
  2. The scheduler uses sum(PodSpec.Containers[].Resources) to perform pod resources accounting in updatePod (removePod / addPod) and decides the action. If the pod fits on the current node, it sets PodStatus.Conditions[type=PodResizeResources] to ActionUpdate.
  3. The kubelet acts on ActionUpdate and applies the declarative values via UpdateContainerResources (or restarts the container per policy). The kubelet sets PodStatus.ContainerStatuses[].ResourcesAllocated to the declarative value, and sets the PodResizeResources condition to Complete/Done, or Failed on any errors.
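Sketched as Go constants, purely for illustration — neither the type nor the values below exist in Kubernetes:

```go
package main

import "fmt"

// PodResizeStatus mirrors the hypothetical PodStatus.Conditions[type=PodResizeResources]
// values from steps 1-3 above. None of these names are real Kubernetes API fields.
type PodResizeStatus string

const (
	ResizeRequested PodResizeStatus = "ResizeRequested" // step 1: set by the owning controller
	ActionUpdate    PodResizeStatus = "ActionUpdate"    // step 2: set by the scheduler when the pod still fits
	ResizeComplete  PodResizeStatus = "Complete"        // step 3: set by the kubelet on success
	ResizeFailed    PodResizeStatus = "Failed"          // step 3: set by the kubelet on any error
)

func main() {
	// The happy path walks the states in order.
	for _, s := range []PodResizeStatus{ResizeRequested, ActionUpdate, ResizeComplete} {
		fmt.Println(s)
	}
}
```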

Handling the multiple-scheduler race condition: the kubelet reruns pod admission predicates during HandlePodUpdates (perhaps just running PodFitsResources might suffice). If fit == false, the kubelet reschedules the pod if the pod resize policy is InPlacePreferred, and fails the operation if it is InPlaceOnly. (A rough sketch of this guard follows.)
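A self-contained sketch of that kubelet-side guard. The Pod type, fit check, and actions here are stand-ins, not real kubelet code:

```go
package main

import "fmt"

type Pod struct{ Name string }

type PodResizePolicy string

const (
	InPlacePreferred PodResizePolicy = "InPlacePreferred"
	InPlaceOnly      PodResizePolicy = "InPlaceOnly"
)

// handleResize reruns a fit check (stand-in for admission predicates such as
// PodFitsResources) and falls back according to the pod-level resize policy.
func handleResize(pod Pod, policy PodResizePolicy, fits func(Pod) bool) {
	if fits(pod) {
		fmt.Println("apply resize via UpdateContainerResources:", pod.Name)
		return
	}
	switch policy {
	case InPlacePreferred:
		fmt.Println("evict for rescheduling:", pod.Name) // the scheduler places it elsewhere
	case InPlaceOnly:
		fmt.Println("fail resize; set PodResizeResources condition to Failed:", pod.Name)
	}
}

func main() {
	// Simulate losing the race: the pod no longer fits on its node.
	handleResize(Pod{Name: "web-0"}, InPlacePreferred, func(Pod) bool { return false })
}
```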

Handling failure with roll-back & retry rather than letting the user handle failures: resizing may fail at the scheduler due to a pod disruption budget or insufficient node resources gated by policy, or at the kubelet due to the multiple-scheduler race condition with the InPlaceOnly pod resize policy. We feel it may be worth doing a smart retry as the default mode of operation on resize failure at the pod level. On failure, the controller queues the failed pod for a resize retry. The retries are triggered by events such as pods leaving a node (InPlaceOnly - insufficient node resources case) or a PDBUpdate (PDB-violation failure case). The retry approach seems to fit the k8s paradigm.

Handling resize requests while a resize operation is pending: queue the requests and apply them from the queue one-by-one upon completion of the inflight operation (success or failure). A minimal serialization sketch follows.
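A minimal sketch of that serialization, assuming a simple in-memory FIFO per pod (just the idea, not the client-go workqueue):

```go
package main

import "fmt"

// ResizeRequest is one desired resource change for a pod.
type ResizeRequest struct {
	Pod string
	CPU string
}

// resizeQueue serializes requests: new requests queue behind the inflight
// one and are applied one-by-one when it completes (success or failure).
type resizeQueue struct {
	pending  []ResizeRequest
	inflight bool
}

func (q *resizeQueue) enqueue(r ResizeRequest) {
	q.pending = append(q.pending, r)
	q.kick()
}

// complete is called when the inflight operation finishes, regardless of outcome.
func (q *resizeQueue) complete() {
	q.inflight = false
	q.kick()
}

// kick starts the next pending request if nothing is inflight.
func (q *resizeQueue) kick() {
	if q.inflight || len(q.pending) == 0 {
		return
	}
	next := q.pending[0]
	q.pending = q.pending[1:]
	q.inflight = true
	fmt.Println("starting resize:", next.Pod, "cpu ->", next.CPU)
}

func main() {
	q := &resizeQueue{}
	q.enqueue(ResizeRequest{Pod: "web-0", CPU: "1500m"}) // starts immediately
	q.enqueue(ResizeRequest{Pod: "web-0", CPU: "2"})     // waits for the first
	q.complete()                                         // first finishes; second starts
}
```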

Please let me know how this sounds; very likely I’m missing some details.


@bsalamat (Member) commented Jan 4, 2019

@kgolab Do you plan to move this KEP to the new repo? Also, as mentioned above, there are several parallel efforts/proposals trying to address the same problem. It would be best if you folks could join efforts and merge the ideas into a single KEP.

@resouer (Contributor) commented Jan 8, 2019

@bsalamat Yes; for instance, we do have a patch for in-place update against the latest Kubernetes. I will try to ping @kgolab to see if a joint-effort KEP is possible.

@kgolab (Author) commented Jan 10, 2019

@bsalamat, @resouer, @vinaykul - yes I do plan to pick up this topic again, most likely early next week.
Sorry it's taking so long :(

@vinaykul (Member)

> @bsalamat, @resouer, @vinaykul - yes I do plan to pick up this topic again, most likely early next week.
> Sorry it's taking so long :(

@kgolab I joined the SIG-node weekly meeting this Tuesday and brought this topic up. I mentioned, based on my chat with @bskiba and @DirectXMan12 during last KubeCon, that we were looking for sig-node to review and comment on the proposal. @bsalamat and I talked about this as well at KubeCon, and he has been very helpful and supportive of this effort.

@dchen1107 seemed open to it, and suggested that we move the draft KEP into the new process and create a starting point for them.

If you don't mind, I can move your draft-KEP over to k/enhancements/kep/sig-autoscaling. I have blocked out some time tomorrow to closely review the new KEP process and get this moved over, and send a PR.

Or would you prefer to drive it next week?

@resouer (Contributor) commented Jan 10, 2019

@vinaykul I think Karol is planning to start the new KEP, considering all his efforts in this old one.

At the same time, would it be possible for you and other folks interested in this feature to draft a simple requirements/use-cases/design doc from your own perspective?

@vinaykul (Member)

> @vinaykul I think Karol is planning to start the new KEP, considering all his efforts in this old one.
>
> At the same time, would it be possible for you and other folks interested in this feature to draft a simple requirements/use-cases/design doc from your own perspective?

@resouer The above merged draft KEP builds on our design and the IBM proposal. Please review the following thread: https://groups.google.com/forum/#!topic/kubernetes-sig-scheduling/UnIhGOKpohI

The design doc linked in that thread documents our requirements and use case, and there are links to the implementation to try out as well. It would be great if you could review it and provide any feedback to help evolve the merged design.

@resouer (Contributor) commented Jan 11, 2019

@vinaykul That would be great! I am drafting a requirements doc from our side to help with the joint-effort KEP as well. Will share it next week.

@kgolab (Author) commented Jan 11, 2019

@vinaykul, if you have time to move the draft to the new KEP process, that would be great. If not, I'll pick it up as written earlier.

@resouer, please let us know once you have your requirements ready; it would be good to include them in the merged KEP so it's really a joint one.

@vinaykul (Member)

> @vinaykul, if you have time to move the draft to the new KEP process, that would be great. If not, I'll pick it up as written earlier.
>
> @resouer, please let us know once you have your requirements ready; it would be good to include them in the merged KEP so it's really a joint one.

@kgolab I went through the KEP template & documentation in k/enhancements, and no changes were needed to your initial KEP besides changing the owning-sig to sig-autoscaling and adding an initial set of reviewers from sig-scheduling and sig-node, where I think the bulk of the code changes will occur. I'll fix it if sig-autoscaling is not the right owner. To get the ball rolling, I plan to join the weekly sig-node meeting next Tuesday and bring this up for review.

@resouer Please share your requirements and implementation, as they can help evolve this to address all known use cases.

@resouer (Contributor) commented Mar 12, 2019

@vinaykul The requirements from my side have been included in our team member's comment here: kubernetes/enhancements#686 (comment)

@vinaykul (Member) commented Mar 12, 2019

> @vinaykul The requirements from my side have been included in our team member's comment here: kubernetes/enhancements#686 (comment)

@resouer Yes, I kept those requirements in mind while updating the KEP flow control. We can have policy guide whether the initiating actor (e.g. a job/deploy controller) takes no action (default), retries, or reschedules - I'm yet to add this part to the KEP. Resizing ephemeral storage (while I've not scoped it in the KEP) should be a doable extension, and the currently defined policies may be valid choices for it as well.

As for not changing PodSpec: as discussed in that comment, changing it is a reasonable approach. Annotations are a security concern, as they can allow a user to mess up the system.
