Emit event when pod eviction is blocked due to pod disruption budget #1599
Comments
This issue is currently awaiting triage. If Karpenter contributors determine this is a relevant issue, they will accept it by applying the appropriate triage label.
This is also a big loss in availability for my production application 😢
Can we disable this validation check conditionally, based on some external NodePool configuration? We are hoping that if we can disable it somehow, we can later catch a message here in the form of events when nodes are in a hung state, rollout-restart the deployments to unblock them, and let the node get deleted.
What if we proceed with a rollout restart policy when the current situation doesn't align with the PDB configuration for all workloads (not only single-replica ones)? The potential downside is that the pending pods from the restart might trigger the creation of new nodes, which could result in an environment that never stabilizes.
Out of curiosity, I forked this repo, commented out this validation check, and deployed that customised image in my cluster. I found that a node (running an app with a single replica as well as a PDB) has the below set of events in its corresponding nodeclaim. We can clearly see that node deletion is blocked because of a PDB violation; I guess DisruptionTerminating is the key event here. When I did a rollout restart of the existing deployment with one replica and a PDB, the pod was gracefully rescheduled on another node and after that the node got deleted. So if we can get this request accepted, we can look for the DisruptionTerminating event from our custom controller and then restart only those deploy/sts which have one replica and a PDB; the rest of the deploy/sts Karpenter would take care of automatically. Not a nice solution, but an effective one.
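A rough sketch of that restart step, assuming the custom controller already knows which single-replica Deployments sit on the affected node (the function name and wiring are hypothetical, not part of Karpenter): it triggers the same rollout that `kubectl rollout restart` does, by stamping the pod template with a restartedAt annotation.

```go
package example

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// restartDeployment stamps the pod template with a restartedAt annotation,
// so the Deployment controller brings up a fresh pod (on another,
// non-cordoned node) before the old one goes away.
func restartDeployment(ctx context.Context, cs kubernetes.Interface, namespace, name string) error {
	patch := fmt.Sprintf(
		`{"spec":{"template":{"metadata":{"annotations":{"kubectl.kubernetes.io/restartedAt":%q}}}}}`,
		time.Now().Format(time.RFC3339),
	)
	_, err := cs.AppsV1().Deployments(namespace).Patch(
		ctx, name, types.StrategicMergePatchType, []byte(patch), metav1.PatchOptions{},
	)
	return err
}
```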
/assign I will bring this issue to the community meeting and follow up on next steps.
Hey, can you control this at the pod level by setting maxUnavailable = 0 and maxSurge = 1, so that K8s creates a new pod first before removing the previous one?
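For reference, a minimal sketch of the rolling-update settings that comment suggests, expressed with the typed Kubernetes API structs (the function name is illustrative); the next comment explains why eviction during node drain does not go through this path.

```go
package example

import (
	appsv1 "k8s.io/api/apps/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// zeroUnavailableStrategy expresses the suggestion above: during a rolling
// update the Deployment controller may surge one extra pod (maxSurge: 1) and
// must never drop below the desired replica count (maxUnavailable: 0).
func zeroUnavailableStrategy() appsv1.DeploymentStrategy {
	maxUnavailable := intstr.FromInt(0)
	maxSurge := intstr.FromInt(1)
	return appsv1.DeploymentStrategy{
		Type: appsv1.RollingUpdateDeploymentStrategyType,
		RollingUpdate: &appsv1.RollingUpdateDeployment{
			MaxUnavailable: &maxUnavailable,
			MaxSurge:       &maxSurge,
		},
	}
}
```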
@hdimitriou No, that's not possible. The k8s Eviction API (which Karpenter uses) does not respect maxUnavailable = 0 and maxSurge = 1; it deletes the existing pod before a new one is created.
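For context, a minimal client-go sketch of such an eviction request (the helper name is illustrative): the only protection the Eviction subresource offers is the PDB check itself, and when the PDB would be violated the API server refuses the request rather than surging a replacement pod first.

```go
package example

import (
	"context"
	"fmt"

	policyv1 "k8s.io/api/policy/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// tryEvict issues an Eviction for a pod, the same subresource a node drain
// uses. When removing the pod would violate a PodDisruptionBudget (e.g. a
// single replica behind minAvailable: 1), the API server answers
// 429 TooManyRequests and the pod stays where it is; no replacement pod is
// created first.
func tryEvict(ctx context.Context, cs kubernetes.Interface, namespace, pod string) error {
	err := cs.PolicyV1().Evictions(namespace).Evict(ctx, &policyv1.Eviction{
		ObjectMeta: metav1.ObjectMeta{Name: pod, Namespace: namespace},
	})
	if apierrors.IsTooManyRequests(err) {
		return fmt.Errorf("eviction of %s/%s blocked by a PDB: %w", namespace, pod, err)
	}
	return err
}
```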
@jwcesign, are there any updates on the community review of that? The current behaviour breaks zero-downtime moves of pods between nodes. Karpenter should wait until the new pods are marked as healthy before destroying the old ones.
Hi, I have a similar situation: I have an application with a single replica and I can't perform rebalancing without downtime. Are there any hacks or workarounds? For example, using preStop or terminationGracePeriodSeconds so that Kubernetes runs a new pod in parallel (handled by the deployment) while the old one stays alive for the time needed to start the new one. I've also seen similar issues, but they mentioned custom controllers and handlers:
We are also thinking of creating a custom controller to handle this, but first we need some event to trigger the controller on, which is what this thread is about. Once this is closed, we plan to open-source it.
Description
Have zero downtime for applications with a single replica during consolidation/drift of the underlying node.
Very important (blocker for adopting Karpenter).
Hi,
We have a cluster with development environments where each pod is a single replica. During consolidation, Karpenter deletes this single pod, so it moves to Terminating status while the new pod is still in Init status. Of course this causes downtime, as new connections cannot be routed to a Terminating pod.
We'd like to have some option to control how pods are rescheduled during consolidation. I think that, after the node is cordoned, ideally we would do a rollout restart of the pods instead of draining the node.
I could see one pull request regarding this, which was closed at the end of last year.
So can we do something about this? Maybe you can just emit an event from the nodeclaim when it is about to be disrupted; we could catch that event from our own custom controller and do a rollout restart of the existing deployments and statefulsets, which would reschedule the workloads onto other nodes, and then Karpenter would automatically taint and delete the existing node. We tried to follow this approach, but what we saw is that the DisruptionBlocked event is emitted continuously whenever an app with 1 replica and a PDB exists, irrespective of whether the node is disruptable or not. So we really can't run any logic based on the DisruptionBlocked event; it's kind of a false alarm for us.
In a nutshell, we need an event emitted when a node actually could not be disrupted because of the presence of a PDB (not as a validation check like the current DisruptionBlocked event). Maybe toggling the sequence of DisruptionBlocked and Unconsolidatable would help.
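To make the ask concrete, here is a hedged sketch of how a custom controller might consume such an event; "EvictionBlocked" is only a placeholder reason for whatever the new event would be called, and the helper name is hypothetical.

```go
package example

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// blockedEvents lists events for a given nodeclaim, filtered to a single
// reason. A custom controller could react only to this signal (e.g. by
// rollout-restarting single-replica workloads on that node) instead of the
// continuously emitted DisruptionBlocked validation event.
func blockedEvents(ctx context.Context, cs kubernetes.Interface, nodeClaim string) (*corev1.EventList, error) {
	return cs.CoreV1().Events(metav1.NamespaceAll).List(ctx, metav1.ListOptions{
		FieldSelector: "reason=EvictionBlocked,involvedObject.name=" + nodeClaim,
	})
}
```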
Below is the sequence of existing events