Distinguish wait-shutdown command from standard k8s SIGTERM #6287
Comments
If we add logging about the shutdown, would that be enough?
@rittneje reminder
@strongjz Yes, assuming those logs would be sent to the main container's stderr/stdout.
/triage accepted
/lifecycle active
/assign
I am also confused by the preStop hook.
Hi 👋 OK, maybe I'm completely wrong here, so sorry in advance if that's the case. I was investigating a potential problem we have (a service receiving some 504 errors, apparently linked to an ingress-controller downscaling event in another cluster), so I was looking at how nginx handles SIGTERM. Apparently it treats SIGTERM as a fast shutdown (essentially like SIGKILL) and uses SIGQUIT for graceful shutdown. At the end of 2020 the nginx project changed all their Docker images to use SIGQUIT instead of SIGTERM as the stop signal (see commit). Is a graceful shutdown the goal for nginx-controller, as Kubernetes intends by default?
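For illustration only, here is a minimal Go sketch of the two stop signals being discussed. The PID-file path is an assumption (nginx conventionally writes its master PID to /run/nginx.pid); this is not code from the controller.

```go
// Sketch: send nginx its graceful-shutdown signal (SIGQUIT) rather than
// its fast-shutdown signal (SIGTERM). Path and behavior notes are based on
// stock nginx defaults, not on ingress-nginx internals.
package main

import (
	"os"
	"strconv"
	"strings"
	"syscall"
)

func nginxMasterPID() (int, error) {
	b, err := os.ReadFile("/run/nginx.pid") // conventional location; may differ
	if err != nil {
		return 0, err
	}
	return strconv.Atoi(strings.TrimSpace(string(b)))
}

func main() {
	pid, err := nginxMasterPID()
	if err != nil {
		panic(err)
	}
	// SIGQUIT: graceful shutdown - stop accepting new connections and
	// let in-flight requests drain before exiting.
	_ = syscall.Kill(pid, syscall.SIGQUIT)

	// SIGTERM, by contrast, is nginx's fast shutdown: workers exit
	// without draining in-flight connections.
	// _ = syscall.Kill(pid, syscall.SIGTERM)
}
```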
We had to disable the preStop hook to get proper shutdown behavior with long-running gRPC traffic, and have been running successfully for months.
(Related to graceful shutdown rather than signal processing; sorry for the long post with updates.) I guess the easiest way is to use ingress-nginx/pkg/flags/flags.go Line 208 in a581a7b
Other options:
We had that as a shell preStop hook for months with plain nginx without any issues. So wait-for-shutdown could just send the graceful quit signal. Also, nginx has supported graceful quit for over 3 years now.
Also see #6928
@nvtkaszpir Please note that SIGTERM is only sent to nginx-ingress-controller (i.e., PID 1), not to nginx. The controller handles this signal by first sleeping for a grace period: ingress-nginx/pkg/util/process/sigterm.go Lines 32 to 49 in 5b35651
ingress-nginx/internal/ingress/controller/nginx.go Lines 367 to 415 in 5b35651
In practice, 0 is a bad default value for that grace period.
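Roughly the flow described above, as a sketch only. The struct, the `Stop` method, and the hard-coded grace period are illustrative stand-ins, not the controller's actual API or flag handling.

```go
// Sketch: on SIGTERM, wait a grace period, then ask nginx to shut down
// gracefully. Names are illustrative, not the real ingress-nginx code.
package main

import (
	"log"
	"os"
	"os/signal"
	"syscall"
	"time"
)

type controller struct{}

// Stop stands in for the controller's shutdown path, which ultimately
// asks the nginx master to quit gracefully so in-flight requests drain.
func (c *controller) Stop() error {
	log.Println("asking nginx master to quit gracefully")
	return nil
}

func handleSigterm(c *controller, gracePeriod time.Duration) {
	ch := make(chan os.Signal, 1)
	signal.Notify(ch, syscall.SIGTERM)
	<-ch

	log.Printf("received SIGTERM, waiting %s before shutting down", gracePeriod)
	time.Sleep(gracePeriod) // a 0s grace period skips this wait entirely
	if err := c.Stop(); err != nil {
		log.Printf("error during shutdown: %v", err)
	}
	os.Exit(0)
}

func main() {
	handleSigterm(&controller{}, 10*time.Second)
}
```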
You're right, SIGTERM goes to the controller, which then triggers a graceful quit in nginx.
I just tested
This issue has not been updated in over 1 year and should be re-triaged. For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/. /remove-triage accepted
The /wait-shutdown preStop script's only job is to send SIGTERM to nginx-ingress-controller, which is PID 1, so behavior is the same with or without it in Kubernetes environments. See kubernetes#6287 for discussion.
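As a rough sketch of what such a preStop helper amounts to under that description (this is not the repo's actual cmd/waitshutdown/main.go):

```go
// Sketch: a wait-shutdown-style preStop helper whose only job, per the
// comment above, is to signal the controller (PID 1) with SIGTERM.
package main

import (
	"log"
	"syscall"
)

func main() {
	if err := syscall.Kill(1, syscall.SIGTERM); err != nil {
		log.Fatalf("failed to signal controller (pid 1): %v", err)
	}
}
```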
As of v0.26.0, we can specify /wait-shutdown as a pre-stop hook in our deployment spec. As currently implemented, it sends SIGTERM to the nginx-ingress-controller process.
ingress-nginx/cmd/waitshutdown/main.go
Line 29 in 59a7f51
Since SIGTERM is also what k8s sends when shutting down a pod, there is no way to tell for sure from the logs whether the pre-stop hook was properly configured. For this reason, it would be better if wait-shutdown used a different signal.
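One way the request could be realized, sketched under the assumption that the controller listened for a second, distinct signal in addition to SIGTERM. The choice of SIGUSR2 here is purely illustrative, not part of the actual proposal or the controller's signal handling.

```go
// Sketch: distinguish the preStop hook's signal from the SIGTERM that
// Kubernetes sends on pod deletion, so the logs show which path triggered
// shutdown. The specific second signal is a hypothetical example.
package main

import (
	"log"
	"os"
	"os/signal"
	"syscall"
)

func main() {
	ch := make(chan os.Signal, 1)
	// SIGTERM: sent by the kubelet on pod termination.
	// SIGUSR2: hypothetical signal a preStop hook could send instead.
	signal.Notify(ch, syscall.SIGTERM, syscall.SIGUSR2)

	switch <-ch {
	case syscall.SIGUSR2:
		log.Println("shutdown requested by preStop hook")
	case syscall.SIGTERM:
		log.Println("shutdown requested by Kubernetes (SIGTERM)")
	}
	// ... proceed with the usual graceful shutdown path ...
}
```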
/kind feature