podman kube down kills pod immediately instead of performing a clean shutdown #19135
Labels
kind/bug
locked - please file new issue/PR
Issue Description
While playing around with Podman's systemd generator (and Quadlet) in a Fedora Server 38 VM, I noticed that running `systemctl stop <unit>` on the service generated from my `.kube` file leaves the unit in the failed state because the main process (the service container's `conmon`) exits with code 137, which suggests that `conmon` got SIGKILL'd.

I then started playing with the bare `podman kube play` / `podman kube stop` commands (without the generator and/or systemd in the mix) and noticed that, when running a service that takes some time to shut down after receiving the stop signal (in my tests I used the `marctv/minecraft-papermc-server:latest` Minecraft server image from Docker Hub), I get two different behaviors depending on which command I use to stop the service:

- `podman pod stop testpod`: the pod takes some time to quit (around 4 seconds on my machine), and `podman pod logs -f` in the meantime shows container messages related to the server quitting (saving chunk data to disk, et al.)
- `podman kube down test.yml`: the pod quits instantly (the java process too, verified with `watch -n 0.1 "ps aux | grep java"`), and nothing is printed to either the pod logs (which quit instantly as well) or the system journal

I went on to replicate the issue on my Arch Linux main machine (both environments used podman 4.5.1) and, sure enough, the same behavior could be observed.

Before opening this issue I tried removing my custom `containers.conf`, as well as creating a new one that just sets the default container stop timeout to some high value (I tried both 600 and 6000 seconds). I also tried `podman system prune` and `podman system reset`, to no avail. All tests have been run with SELinux in permissive mode (or no SELinux at all on Arch) on an otherwise minimally configured system.

I tried to craft a minimal example that triggers the issue on my end, here it is:
Steps to reproduce the issue
1. `podman kube play <filename>`
2. Follow along with `podman pod logs -f` and/or a `watch -n 0.1` that keeps the container process clearly visible
3. `podman pod stop <name>`
4. `podman pod start <name>` and await initialization
5. `podman kube down <filename>` (a concrete version of this sequence is sketched below)
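Concretely, assuming the `testpod` / `test.yml` names from the sketch above, the sequence looks like this:

```shell
podman kube play test.yml
podman pod logs -f testpod          # in a second terminal, or:
watch -n 0.1 "ps aux | grep java"   # keep the server process visible

podman pod stop testpod             # clean: shutdown messages in the logs, takes a few seconds
podman pod start testpod            # bring the pod back up and wait for initialization

podman kube down test.yml           # unclean: the pod and the java process vanish immediately
```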
Describe the results you received
The pod gets terminated uncleanly (killed immediately, with no graceful shutdown) despite the stop timeout being configured to a high value.
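For context, the "configured to a high value" part refers to the default container stop timeout in `containers.conf`; a sketch of such an override, assuming the 600-second value mentioned above:

```toml
# ~/.config/containers/containers.conf (or /etc/containers/containers.conf)
[containers]
# Seconds to wait after the stop signal before a container is force-killed (default 10)
stop_timeout = 600
```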
Describe the results you expected
Either one of two outcomes:

- `podman kube down` performing a clean shutdown, like `podman pod stop` does, or
- in case this behavior of the command is intentional (I was not able to positively determine this from the man page), maybe a `--soft` option for `kube down` could be implemented and used by the generator?
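One way to approximate the expected behavior in the meantime (an untested sketch, using the names from above) would be to stop the pod explicitly before tearing it down, so the configured timeout is honored:

```shell
# Graceful stop first (honors the stop timeout), then remove the kube objects
podman pod stop -t 600 testpod
podman kube down test.yml
```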
podman info output
Podman in a container
No
Privileged Or Rootless
Privileged
Upstream Latest Release
Yes
Additional environment details
Default-settings QEMU virtual machine with a single NAT virtual network interface, run in a privileged session
Additional information
None