ts-node-dev doesn't allow graceful shutdowns when managed by external processes #282
Comments
I have published my fork as a scoped package as a workaround in the meantime.
I spent two hours struggling with this until I narrowed it down to ts-node-dev not waiting for the graceful shutdown. Thanks for your attempt at fixing this, @anthonyalayo. I tried your fork and can confirm it works for me as well. Any chance this can be fixed?
@thodoo thanks for the issue validation! Honestly, I need some help from the maintainer who wrote the unit tests. I spent hours debugging them to get them working, but no luck.
Any updates regarding this issue?
Any updates?
Issue description

If `ts-node-dev` is running a process that has graceful shutdowns, those graceful shutdowns will not be honored in situations such as a Kubernetes rollout restart, where another process above `ts-node-dev` is managing the execution.

Context
Some of the context is in this pull request that attempted to fix the issue: #269

Our current codebase does not wait for a graceful shutdown when managed by an external process.
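To illustrate the behavior being requested, here is a minimal sketch (not ts-node-dev's actual implementation; `server.js` is a placeholder for the supervised script) of a parent process that forwards SIGTERM to its child and waits for the child to exit on its own, rather than tearing it down immediately:

```ts
// Minimal sketch, not ts-node-dev's actual code: forward SIGTERM to the
// child and let its own shutdown handlers run before the parent exits.
import { spawn } from "child_process";

// "server.js" is a placeholder for whatever script the parent supervises.
const child = spawn("node", ["server.js"], { stdio: "inherit" });

process.on("SIGTERM", () => {
  // Forward the signal so the child can run its graceful-shutdown logic...
  child.kill("SIGTERM");
});

// ...and only exit once the child has actually finished.
child.on("exit", (code, signal) => {
  process.exit(code ?? (signal ? 1 : 0));
});
```

Without the `child.on("exit", ...)` step, the parent would exit immediately and the orchestrator would consider the process gone while the child's cleanup was still mid-flight.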
Did you try to run with ts-node?

Yes, this works with `ts-node`. The fix was merged here: TypeStrong/ts-node#419

Example
Here is an example. I created a simple Express server with the following graceful shutdown logic:
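The original snippet was not preserved here; the following is a hedged reconstruction of what such shutdown logic typically looks like in an Express server (the port number and log messages are illustrative):

```ts
import express from "express";

const app = express();
// Port 3000 is illustrative; the issue's original snippet was not preserved.
const server = app.listen(3000, () => console.log("listening on 3000"));

process.on("SIGTERM", () => {
  console.log("SIGTERM received, draining connections...");
  // Stop accepting new connections; the callback runs once all
  // in-flight requests have completed.
  server.close(() => {
    console.log("graceful shutdown complete");
    process.exit(0);
  });
});
```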
The server is started via `ts-node-dev` inside a Docker container managed by Kubernetes. When `ts-node-dev` detects a change within the container, it successfully goes through the shutdown logic: the shutdown completes entirely and the server starts back up.
Now, when attempting to delete a pod via `kubectl delete pod` to mimic a rollout, the graceful shutdown is not honored: the process is terminated before the shutdown logic completes.
While the PR I created (#269) fixed the issue when tried out in practice, I am not able to get our test suite passing. Attempting to debug the test suite has been unfruitful, as the spawned child processes offer extremely limited support for either print debugging or breakpoints.

If anyone can tackle it, that would be appreciated.