retryOnTimeout / retryBackoff: #110
Comments
I found a way to disable the new behavior by overriding the Transport class with a custom class (see the sketch below).
However, I would like to understand whether the added features really behave correctly.
From my point of view, the retry backoff should only apply when we retry on the same failed node, and the retry itself should always be applied.
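For illustration, a minimal sketch of that approach, assuming the retryOnTimeout and retryBackoff options from the linked PR are accepted by the Transport constructor and that the client accepts a custom Transport class; the node URL and exact types are placeholders, not the library's documented API:

```ts
import { Client } from '@elastic/elasticsearch'
import { Transport, TransportOptions } from '@elastic/transport'

// Sketch only: try to restore the pre-change behaviour by always retrying
// timed-out requests and removing the backoff delay between retries.
class LegacyRetryTransport extends Transport {
  constructor (opts: TransportOptions) {
    super({
      ...opts,
      retryOnTimeout: true, // assumption: option name taken from the linked PR
      retryBackoff: () => 0 // no delay between retry attempts
    })
  }
}

const client = new Client({
  node: 'http://localhost:9200', // placeholder
  maxRetries: 4,
  Transport: LegacyRetryTransport // plug the custom transport into the client
})
```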
Creating a custom transport is indeed the appropriate way to override these features for now. Are you saying that you would expect timeouts to be retried when there are nodes in the pool on which the request has not yet been tried?
Yes, exactly, and in this case we don't need to apply backoff.
I'm not sure we should ever retry on timeout unless [...]. But what I do see you saying is that [...]. Does that solution make sense to you?
The change of the retryOnTimeout default to false is a breaking change from my point of view, so at minimum it needs to be called out in the changelog.
In the latest version of the elastic-transport library, two new features were introduced, retryOnTimeout and retryBackoff (commit d2956c2). The backoff function introduced there is:
function retryBackoff (min: number, max: number, attempt: number): number {
  const ceiling = Math.min(max, 2 ** attempt) / 2
  return ceiling + ((Math.random() * (ceiling - min)) + min)
}
https://github.com/elastic/elastic-transport-js/pull/101/files
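For illustration only, here is roughly what that backoff produces for the first few attempts, assuming hypothetical values of min = 0 and max = 30 (the values the library actually passes may differ):

```ts
// Reproduces the backoff computation shown above; the min/max values used
// below are assumptions chosen only to show the growth of the delay.
const retryBackoff = (min: number, max: number, attempt: number): number => {
  const ceiling = Math.min(max, 2 ** attempt) / 2
  return ceiling + ((Math.random() * (ceiling - min)) + min)
}

for (let attempt = 1; attempt <= 4; attempt++) {
  // With min = 0, max = 30: attempt 1 -> [1, 2), 2 -> [2, 4), 3 -> [4, 8), 4 -> [8, 16)
  console.log(`attempt ${attempt}:`, retryBackoff(0, 30, attempt).toFixed(2))
}
```

Each retry therefore waits progressively longer, which is the behaviour the reporter argues should not apply when the retry can go to a different node.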
This breaks how retries should work on a cluster.
🐛 Bug Report
If you have 3 Elasticsearch nodes with maxRetries = 4 and a query times out on node 1, the connection pool is expected to retry that query on node 2 and node 3 if needed.
With this update, the query is run only on node 1.
Moreover, this parameter cannot be changed on the Elasticsearch client in order to keep the previous behavior.
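For reference, a minimal sketch of the setup described above (node URLs and the request timeout are placeholders):

```ts
import { Client } from '@elastic/elasticsearch'

// Three-node cluster with maxRetries = 4: the expectation is that a request
// that times out on node 1 is retried on node 2 and node 3 before failing.
const client = new Client({
  nodes: [
    'http://node1:9200', // placeholder URLs
    'http://node2:9200',
    'http://node3:9200'
  ],
  maxRetries: 4,
  requestTimeout: 30000 // ms; placeholder value
})
```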
Your Environment
@elastic/elasticsearch
8.14.0