Elasticsearch is fsyncing on transport threads #51904
Labels
>bug
:Distributed Indexing/CRUD — A catch-all label for issues around indexing, updating, and getting a doc by id. Not search.

Tim-Brooks added the >bug and :Distributed Indexing/CRUD labels on Feb 5, 2020
Pinging @elastic/es-distributed (:Distributed/CRUD)
Relates #39793 (comment)
It seems to me this issue would be resolved automatically by #51035: if we simply bounded the number of in-flight bulk requests, rejections on the write pool would become impossible.
Tim-Brooks added a commit to Tim-Brooks/elasticsearch that referenced this issue on Feb 5, 2020
Currently the shard bulk request can be rejected by the write threadpool after a mapping update. This introduces a scenario where the mapping listener thread will attempt to finish the request and fsync. This thread can potentially be a transport thread. This commit fixes this issue by forcing the finish action to happen on the write threadpool. Fixes elastic#51904.
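For illustration, here is a minimal plain-Java sketch of the direction the commit describes, not the actual Elasticsearch change: the request-finishing step, which ends in the fsync, is always handed back to the write pool instead of being run on the thread that delivered the mapping update. The names writePool, finishAndFsync, and failRequest are assumptions for the sketch only.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.RejectedExecutionException;

// Illustrative sketch only: always re-dispatch the request-finishing step (which
// performs the translog fsync/refresh) onto the write pool, so it never runs on
// the transport or cluster-state thread that observed the mapping update.
final class ForceFinishOntoWritePool {
    private final ExecutorService writePool = Executors.newFixedThreadPool(4);

    void onMappingUpdated(Runnable finishAndFsync, Runnable failRequest) {
        try {
            writePool.execute(finishAndFsync);   // fsync happens on a write-pool thread
        } catch (RejectedExecutionException e) {
            // If the hand-off itself is rejected, fail the request rather than
            // doing the I/O inline (the real change may instead force execution).
            failRequest.run();
        }
    }
}
```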
Tim-Brooks added a commit that referenced this issue on Feb 15, 2020
Currently the shard bulk request can be rejected by the write threadpool after a mapping update. This introduces a scenario where the mapping listener thread will attempt to finish the request and fsync. This thread can potentially be a transport thread. This commit fixes this issue by forcing the finish action to happen on the write threadpool. Fixes #51904.
Tim-Brooks added a commit to Tim-Brooks/elasticsearch that referenced this issue on Feb 18, 2020
Currently the shard bulk request can be rejected by the write threadpool after a mapping update. This introduces a scenario where the mapping listener thread will attempt to finish the request and fsync. This thread can potentially be a transport thread. This commit fixes this issue by forcing the finish action to happen on the write threadpool. Fixes elastic#51904.
Tim-Brooks added a commit that referenced this issue on Feb 25, 2020
Currently the shard bulk request can be rejected by the write threadpool after a mapping update. This introduces a scenario where the mapping listener thread will attempt to finish the request and fsync. This thread can potentially be a transport thread. This commit fixes this issue by forcing the finish action to happen on the write threadpool. Fixes #51904.
Tim-Brooks added a commit that referenced this issue on Feb 25, 2020
Currently the shard bulk request can be rejected by the write threadpool after a mapping update. This introduces a scenario where the mapping listener thread will attempt to finish the request and fsync. This thread can potentially be a transport thread. This commit fixes this issue by forcing the finish action to happen on the write threadpool. Fixes #51904.
It is currently possible for a cluster state listener to execute a transport fsync.

In TransportShardBulkAction, it is possible that a shard operation will trigger a mapping update. A ClusterStateObserver.Listener is used to continue when the mapping is complete. The onRejection callback will fail outstanding operations and complete the request (probably trying to notify of the operations that were able to be completed). TransportShardBulkAction will attempt to fsync or refresh as necessary after initiating replication.

Here is a transport_worker stack trace. I also think these listeners might be executed on cluster state threads?
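To make the failure mode concrete, below is a minimal, self-contained plain-Java model of the path described above; the executor wiring and names such as finishRequestAndFsync are assumptions for illustration, not Elasticsearch's actual types. When the hand-off to a saturated write pool is rejected, the rejection path completes the request, and therefore the fsync, on the calling (transport-like) thread.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Models the reported hazard: a rejected hand-off to the write pool causes the
// fsync to run on whatever thread invoked the listener (e.g. a transport_worker).
public class TransportThreadFsyncHazard {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService writePool = new ThreadPoolExecutor(
            1, 1, 0, TimeUnit.MILLISECONDS, new SynchronousQueue<>());

        // Saturate the single write-pool worker so the next submission is rejected.
        writePool.execute(() -> sleep(500));

        Runnable finishRequestAndFsync = () ->
            System.out.println("fsync on thread: " + Thread.currentThread().getName());

        // Pretend this block runs inside the mapping-update listener, i.e. on a
        // transport or cluster-state thread.
        Thread transportWorker = new Thread(() -> {
            try {
                writePool.execute(finishRequestAndFsync);
            } catch (RejectedExecutionException e) {
                // onRejection-style path: completing the request here means the
                // fsync happens on this (transport) thread.
                finishRequestAndFsync.run();
            }
        }, "transport_worker");
        transportWorker.start();
        transportWorker.join();
        writePool.shutdown();
    }

    private static void sleep(long millis) {
        try { Thread.sleep(millis); } catch (InterruptedException ignored) { }
    }
}
```

Running this prints "fsync on thread: transport_worker", which is the shape of the problem: the I/O ends up on a thread that should never block on disk.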