
Workaround for shutdown error #722

Merged
melix merged 1 commit into 2.6.x from cc/fix-shutdown on Sep 26, 2024
Conversation

@melix melix (Collaborator) commented Sep 25, 2024

This commit is a workaround for an error which seems to happen since the upgrade to Micronaut 4.6. In a nutshell, when the `/stop` endpoint is called, the thread that monitors whether the service shuts down properly after a timeout was started and the application context was closed, _before_ the `stop` method could return a response to the client. As a consequence, there was an error message saying that the application context wasn't open and that a bean (the message writers) couldn't be found. This wasn't quite accurate: the application context _had been_ open, but no longer was.

The workaround, which isn't great, is to use the task scheduler to delay the shutdown by a few hundred milliseconds. This gives the response a chance to reach the client _before_ the application context is shut down.

Note that in any case this wasn't a big issue, since the service would be shut down anyway, but the error message shown to the user wasn't great.
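A minimal sketch of the delayed-shutdown workaround described above. `TaskScheduler.schedule` and `ApplicationContext.close` are real Micronaut APIs; the controller name, route, and delay value are illustrative assumptions, not the actual code in this PR:

```java
import io.micronaut.context.ApplicationContext;
import io.micronaut.http.annotation.Controller;
import io.micronaut.http.annotation.Post;
import io.micronaut.scheduling.TaskExecutors;
import io.micronaut.scheduling.TaskScheduler;
import jakarta.inject.Named;

import java.time.Duration;

@Controller("/stop") // hypothetical controller, for illustration only
class StopController {

    private final ApplicationContext context;
    private final TaskScheduler taskScheduler;

    StopController(ApplicationContext context,
                   @Named(TaskExecutors.SCHEDULED) TaskScheduler taskScheduler) {
        this.context = context;
        this.taskScheduler = taskScheduler;
    }

    @Post
    String stop() {
        // Delay closing the context so the HTTP response below can be
        // written while the message writers are still available.
        taskScheduler.schedule(Duration.ofMillis(500), context::close);
        return "shutting down";
    }
}
```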

@melix melix added the type: bug Something isn't working label Sep 25, 2024
@melix melix added this to the 2.6.2 milestone Sep 25, 2024
@melix melix requested a review from alvarosanchez September 25, 2024 12:49
@alvarosanchez alvarosanchez (Member) left a comment

Does this deserve any further investigation in core? @yawkat @dstepanov @graemerocher

@dstepanov

In 4.6 @graemerocher added a check to prevent accessing the bean context when it's shut down.
Which code is trying to access the context after it's been shut down, or while it's in the process of shutting down?

@graemerocher (Contributor)

Correct. The context used to be more forgiving in terms of allowing access to beans even after shutdown. That is no longer the case, and it was a bug that it was possible. That said, this seems to be related to our lack of proper support for graceful shutdown, which ideally needs to be addressed in 4.7.
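A minimal sketch of the stricter 4.6 behavior being described, assuming a hypothetical `Greeter` singleton used only for illustration:

```java
import io.micronaut.context.ApplicationContext;
import jakarta.inject.Singleton;

public class ClosedContextDemo {

    @Singleton
    static class Greeter { // hypothetical bean, needs annotation processing
        String greet() {
            return "hello";
        }
    }

    public static void main(String[] args) {
        ApplicationContext context = ApplicationContext.run();
        context.getBean(Greeter.class).greet(); // fine: context is running
        context.close();
        // Since Micronaut 4.6 this lookup is rejected because the context
        // is no longer running; older versions were more forgiving.
        context.getBean(Greeter.class);
    }
}
```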

@alvarosanchez (Member)

So then, should this logic be handled with a server shutdown event listener, instead of a custom controller method?
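For reference, such a listener might look like the sketch below. `ApplicationEventListener` and `ServerShutdownEvent` are real Micronaut types; the listener class and its body are placeholders:

```java
import io.micronaut.context.event.ApplicationEventListener;
import io.micronaut.runtime.server.event.ServerShutdownEvent;
import jakarta.inject.Singleton;

@Singleton
class ShutdownListener implements ApplicationEventListener<ServerShutdownEvent> {

    @Override
    public void onApplicationEvent(ServerShutdownEvent event) {
        // Placeholder: cleanup that should run when the embedded server
        // stops, e.g. cancelling the timeout-monitoring thread.
    }
}
```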

@graemerocher (Contributor)

not sure how an event listener would help

@melix melix merged commit 8616bf5 into 2.6.x Sep 26, 2024
11 checks passed
@melix melix deleted the cc/fix-shutdown branch September 26, 2024 15:52