When stopping via systemd only kill the JVM, not its control group #25195
Conversation
I checked whether this also affects the old-style init.d script and it doesn't. That script already only kills the JVM and relies on the JVM to do whatever other killing is necessary.
The problem this fixes was reported in this forum thread: https://discuss.elastic.co/t/disabling-machine-learning-does-not-allow-elasticsearch-to-stop/88869
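For context, a conventional init.d stop path of the kind described above signals only the JVM PID recorded in a pid file and leaves any child processes for the JVM itself to clean up. A minimal sketch, not the actual Elasticsearch script; the pid-file path and variable names are placeholders:

#!/bin/sh
# Hypothetical stop fragment: signal only the recorded JVM PID.
PID_FILE=/var/run/elasticsearch/elasticsearch.pid   # placeholder path

if [ -f "$PID_FILE" ]; then
    # No process-group kill here: the SIGTERM goes to the JVM alone,
    # and the JVM is trusted to stop its own child processes.
    kill -TERM "$(cat "$PID_FILE")"
fi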
LGTM. I left a comment about the comment; I trust your judgement on how to address it. I also left you another comment via another channel.
@@ -52,6 +52,9 @@ TimeoutStopSec=0
# SIGTERM signal is used to stop the Java process
KillSignal=SIGTERM

# Send the signal only to the JVM rather than its process group
To be pedantic, this should say control group.
Thanks, I changed the comment.
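For readers without the full diff: the directive introduced under that comment is presumably KillMode=process, which tells systemd to send the stop signal only to the unit's main process rather than to every process in its control group. The line itself is not visible in the excerpt above, so treat the following as a sketch inferred from the PR title rather than a quote of the change:

# SIGTERM signal is used to stop the Java process
KillSignal=SIGTERM

# Send the signal only to the JVM rather than its control group
KillMode=process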
This prevents possible race conditions between the Elasticsearch JVM and plugin native controller processes that can cause the Elasticsearch shutdown to hang. The problem can happen when the JVM and the controller process receive a SIGTERM at almost the same time. (There's an assumption here that Elasticsearch will continue to use other mechanisms to kill native controller processes.)
* master: (27 commits)
  Refactor TransportShardBulkAction.executeUpdateRequest and add tests
  Make sure range queries are correctly profiled. (elastic#25108)
  Test: allow setting socket timeout for rest client (elastic#25221)
  Migration docs for elastic#25080 (elastic#25218)
  Remove `discovery.type` BWC layer from the EC2/Azure/GCE plugins elastic#25080
  When stopping via systemd only kill the JVM, not its control group (elastic#25195)
  Remove PrefixAnalyzer, because it is no longer used.
  Internal: Remove Strings.cleanPath (elastic#25209)
  Docs: Add note about which secure settings are valid (elastic#25212)
  Indices.rollover/10_basic should refresh to make the doc visible in lucene stats
  Port support for commercial GeoIP2 databases from Logstash. (elastic#24889)
  [DOCS] Add ML node to node.asciidoc (elastic#24495)
  expose simple pattern tokenizers (elastic#25159)
  Test: add setting to change request timeout for rest client (elastic#25201)
  Fix secure repository-hdfs tests on JDK 9
  Add target_field parameter to gsub, join, lowercase, sort, split, trim, uppercase (elastic#24133)
  Add Cross Cluster Search support for scroll searches (elastic#25094)
  Adapt skip version in rest-api-spec/test/indices.rollover/20_max_doc_condition.yml
  Rollover max docs should only count primaries (elastic#24977)
  Add remote cluster infrastructure to fetch discovery nodes. (elastic#25123)
  ...
* master: (44 commits)
  Upgrade icu4j for the ICU analysis plugin to 59.1 (elastic#25243)
  move assertBusy to use CheckException (elastic#25246)
  Use SPI in High Level Rest Client to load XContent parsers (elastic#25098)
  [TEST] test that low level REST client leaves path untouched (elastic#25193)
  Speed up PK lookups at index time. (elastic#19856)
  [Docs] Fix documentation for percentiles bucket aggregation (elastic#25229)
  Upgrade to lucene-7.0.0-snapshot-92b1783. (elastic#25222)
  Build: Add master flag for disabling bwc tests (elastic#25230)
  Scripting: Rename SearchScript.needsScores to needs_score (elastic#25235)
  Support script context stateful factory in Painless. (elastic#25233)
  FastVectorHighlighter should not cache the field query globally (elastic#25197)
  Remove QUERY_AND_FETCH BWC for pre-5.3.0 nodes (elastic#25223)
  Add more missing AggregationBuilder getters (elastic#25198)
  Extract the snapshot/restore full cluster restart tests from the translog full cluster restart tests (elastic#25204)
  Refactor TransportShardBulkAction.executeUpdateRequest and add tests
  Make sure range queries are correctly profiled. (elastic#25108)
  Test: allow setting socket timeout for rest client (elastic#25221)
  Migration docs for elastic#25080 (elastic#25218)
  Remove `discovery.type` BWC layer from the EC2/Azure/GCE plugins elastic#25080
  When stopping via systemd only kill the JVM, not its control group (elastic#25195)
  ...
This prevents possible race conditions between the Elasticsearch JVM and
plugin native controller processes that can cause the Elasticsearch shutdown
to hang. The problem can happen when the JVM and the controller process
receive a SIGTERM at almost the same time.
(There's an assumption here that Elasticsearch will continue to use other
mechanisms to kill native controller processes.)
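For anyone verifying the change on a running system, standard systemd tooling can confirm the effective behaviour. The unit name below is the usual packaged one, but treat this as an illustrative sketch rather than part of the PR:

# KillMode should now report "process" instead of the default "control-group".
systemctl show -p KillMode -p KillSignal elasticsearch.service

# status lists every process in the unit's control group, so the JVM and any
# plugin native controller processes should both be visible here.
systemctl status elasticsearch.service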