change: graceful shutdown drains node before k3d container stops #1119
Conversation
This commit should be reverted once k3d-io/k3d#1119 is merged
Hi @arikmaor, thanks for your PR! E.g. one could introduce a flag. What do you think?
I don't think so, I'm assuming you're calling
In the
IMHO, graceful shutdown should be the default, but I guess that's debatable.
Your arguments make total sense, thanks for taking the time.
10x cool :)
LGTM
Thank you @arikmaor! 👍
@all-contributors
I've put up a pull request to add @arikmaor! 🎉
@iwilltry42 @arikmaor The agent does not have a kubeconfig, which causes an error in the agent:
Can happen for all kinds of reasons...
That PR used
Reproduce:
@iwilltry42 perhaps the script should be tweaked a bit for some use case I did not foresee
@arikmaor I think we may even need to revert the change for now.
I'm not sure how "graceful" we could make the shutdown by e.g. using k3d's stop commands; those could easily be handled from the client side, but not docker commands, unfortunately. Do you have any idea?
Let's not give up just yet. If you just run... shutting down the machine is a critical case IMO: you don't expect your containers to just drop dead without a chance of saving their data when you do a clean shutdown.
@iwilltry42 is there a way of knowing inside |
@arikmaor yeah we would know that, but that doesn't help with agent shutdown, right?
mmm... I see what you mean (before I didn't understand that an agent is still running pods, just not replicating the control plane) |
Alright, so we have two major cases here:
1. k3d-induced shutdown of nodes: everything's fine here, we can just run the drain command for all cluster nodes from a single server node.
2. Docker- or system-induced shutdown of nodes: we don't know in which order the nodes are being shut down, and whether it will be possible to drain the node before the container is killed.
Do you know of any way to drain nodes without access to the control-plane?
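For the first of those cases, a minimal sketch of draining all cluster nodes from a single server node might look like the loop below. The flags and the plain kubectl loop are assumptions for illustration, not k3d's actual implementation:

```sh
# Hypothetical sketch: before k3d stops the containers, evict pods from every
# node of the cluster while the control-plane is still reachable.
for node in $(kubectl get nodes -o name); do
  kubectl drain "$node" --ignore-daemonsets --delete-emptydir-data
done
```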
Draining is the recommended way. I think if someone's use case was broken by this PR, we should revert it.
@arikmaor @iwilltry42 any update? This causes the Docker log to grow very quickly and take up disk space.
Moved #205 to v5.7.0 - with this, we can add the option for custom entrypoints and other lifecycle hooks that can execute scripts.
Ref #1420
This PR drains the node before stopping the container (in case of docker stop / k3d cluster stop / k3d node stop / system shutdown). It allows services to gracefully shut down in this case, which is especially useful for databases that need to clear their lock files.
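One constraint worth keeping in mind here: docker stop sends SIGTERM and then escalates to SIGKILL after a grace period (10 seconds by default), so the drain has to finish within that window unless the timeout is raised. For example (the container name below is made up):

```sh
# Give the entrypoint extra time to finish draining before Docker escalates
# from SIGTERM to SIGKILL (the default grace period is 10 seconds).
docker stop --time 60 k3d-mycluster-server-0
```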
Related to the bug I opened
Implications
- sh is now the main process running in the image and is used to intercept incoming signals (SIGTERM etc.) (more info)
- Before stopping the k3s process, kubectl drain is called, causing the pods to evict. When the node is ready again, kubectl uncordon is called to allow pods to reschedule (more info)
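A rough sketch of how such a signal-trapping entrypoint could look. This is a hypothetical illustration, not the exact script added by this PR; NODE_NAME, the k3s invocation, and the drain flags are assumptions:

```sh
#!/bin/sh
# Hypothetical entrypoint sketch: sh stays PID 1 so it receives SIGTERM,
# drains the node before k3s stops, and uncordons it once the node is Ready again.

NODE_NAME="${NODE_NAME:-$(hostname)}"   # assumed to match the Kubernetes node name

drain_and_stop() {
  # Evict pods from this node while the API server is still reachable.
  kubectl drain "$NODE_NAME" --ignore-daemonsets --delete-emptydir-data || true
  kill -TERM "$K3S_PID"
  wait "$K3S_PID"
  exit 0
}

# Run k3s (server or agent) in the background so the shell can handle signals.
/bin/k3s "$@" &
K3S_PID=$!
trap drain_and_stop TERM INT

# After a restart the node comes back cordoned from the previous drain;
# retry uncordon until the API is reachable so pods can reschedule.
( until kubectl uncordon "$NODE_NAME" >/dev/null 2>&1; do sleep 5; done ) &

wait "$K3S_PID"
```

Note that, as reported above, agent nodes have no kubeconfig, so a script along these lines would fail to drain on agents unless it is given credentials or skips the drain there.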