I get the same problem when using docker stop instead of k3d cluster stop to stop the container.
So perhaps it's a problem with the k3s image, which doesn't send SIGTERM to the pods before exiting?
This implements what k8s recommends doing before shutting down a node for maintenance, which is similar to a computer shutdown or to stopping the container.
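For context, the node-maintenance procedure the Kubernetes docs describe boils down to cordoning and draining the node so pods are evicted gracefully before the node goes away. A rough sketch (`<node-name>` is a placeholder; use whatever `kubectl get nodes` reports for the k3d node):

```sh
# Sketch of the documented node-maintenance flow; <node-name> is a placeholder.

# Stop new pods from being scheduled onto the node
kubectl cordon <node-name>

# Evict the existing pods; eviction triggers a normal graceful shutdown
# (SIGTERM first, SIGKILL only after the termination grace period)
kubectl drain <node-name> --ignore-daemonsets

# After maintenance, allow scheduling again
kubectl uncordon <node-name>
```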
When stopping a k3d cluster, the pods don't exit gracefully. This can cause stateful services such as mongodb and other databases to fail to restart. To simplify, I'm describing the behavior with a simple script.
What did you do
How was the cluster created?
k3d cluster create test
What did you do afterwards?
kubectl apply -f test-pod.yaml
kubectl logs -f sleep-test
k3d cluster stop test
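For comparison (not part of the original repro, just a hypothetical cross-check): terminating the pod through the kubelet does produce a graceful shutdown, which makes the difference with k3d cluster stop easy to see in the logs.

```sh
# In one terminal, follow the pod's logs:
kubectl logs -f sleep-test

# In another terminal, delete the pod. The kubelet sends SIGTERM and waits
# for terminationGracePeriodSeconds before SIGKILL, so the script's trap
# handler gets a chance to run and its message shows up in the log stream.
kubectl delete pod sleep-test
```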
These are the relevant files (a rough sketch of main.sh follows below):
Dockerfile
main.sh
test-pod.yaml
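The exact file contents aren't reproduced here; as a rough idea, main.sh is essentially a signal-logging loop along the lines of the sketch below (names and messages are illustrative, not the original file). Presumably the Dockerfile just packages this script into an image, and test-pod.yaml runs that image as the sleep-test pod.

```sh
#!/bin/sh
# Illustrative sketch of a SIGTERM test script, not the original main.sh:
# log when a termination signal arrives and exit cleanly, so the pod logs
# show whether the container was shut down gracefully or killed outright.

graceful_shutdown() {
    echo "Caught SIGTERM, exiting gracefully"
    exit 0
}
trap graceful_shutdown TERM INT

echo "Started, waiting for a termination signal..."
# Sleep in the background and wait on it so the trap fires immediately
# instead of being delayed until a foreground sleep completes.
while true; do
    sleep 1 &
    wait $!
done
```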
What did you expect to happen
I expected the behavior to be the same as when I'm exiting a docker container:
Instead I'm getting a crash:
Just to make it clear, this is just a stupid script to show the problem. The problem is critical for databases like mongodb and others that sometimes will not recover automatically from an ungraceful shutdown.
Which OS & Architecture
Which version of k3d
Which version of docker