Limit retry backoff (and unlimited retries) #750
Comments
I was looking into this a little. There does seem to be an upper limit on the backoff time, which is 24 hours. This value is also applied automatically after 32 attempts (due to overflow). Did you want to make this configurable instead of a constant?
Yes, making it configurable would be great. We would want to configure the maximum to be something like 5-10 minutes.
Hey, so I created a pull request for this change: #756. Please verify that this is how you imagined it. If it's good, I can ask the maintainers to have a look.
Yes, that looks perfect. Thank you!
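The behavior described above (a doubling backoff with a hard 24-hour ceiling that also kicks in after 32 attempts to avoid shift overflow) can be sketched roughly like this. This is an illustrative sketch, not the connector's actual code; the class and constant names are made up.

```java
// Illustrative sketch of a doubling retry backoff with a fixed 24-hour cap.
// Not the connector's actual implementation; names are hypothetical.
public final class BackoffSketch {
    static final long MAX_BACKOFF_MS = 24L * 60 * 60 * 1000; // 24 hours

    static long backoffMs(long initialBackoffMs, int attempts) {
        // For 32+ attempts the shifted value would blow well past the cap
        // (and eventually overflow), so short-circuit to the maximum.
        if (attempts >= 32) {
            return MAX_BACKOFF_MS;
        }
        long backoff = initialBackoffMs << attempts; // initial * 2^attempts
        return Math.min(backoff, MAX_BACKOFF_MS);
    }

    public static void main(String[] args) {
        System.out.println(backoffMs(100, 0));  // 100
        System.out.println(backoffMs(100, 10)); // 102400
        System.out.println(backoffMs(100, 40)); // 86400000 (capped at 24h)
    }
}
```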
The existing retry options are a bit limiting:
When the Elasticsearch cluster is inaccessible (e.g. during a network interruption), we want the connector to be resilient to that situation and resume operation when connectivity to Elasticsearch is reestablished. And we want it to reconnect in a timely manner.
So first of all, it would be nice to allow a way to set max.retries to "unlimited". Granted, the max value of 2147483647 is pretty darn large, and probably going to be enough in practice.
But more importantly, we really need to put an upper limit on the retry backoff. It is nice that it is designed to "wait up to twice as long as the previous wait", but if Elasticsearch is down for a few hours, that would let the backoff time grow to an unacceptably long interval.
We could really use an option to cap the growth of the backoff at some maximum value.
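What's being asked for could look something like the following sketch: the backoff still doubles per attempt, but the ceiling is user-configurable (e.g. 5-10 minutes) rather than fixed. This is a minimal illustration under assumed names, not the connector's API.

```java
// Hypothetical sketch of a doubling backoff with a *configurable* cap,
// as requested in the issue. Names are illustrative only.
public final class CappedBackoff {
    private final long initialBackoffMs;
    private final long maxBackoffMs;

    CappedBackoff(long initialBackoffMs, long maxBackoffMs) {
        this.initialBackoffMs = initialBackoffMs;
        this.maxBackoffMs = maxBackoffMs;
    }

    long backoffMs(int attempts) {
        long backoff = initialBackoffMs;
        // Double until the cap is reached; stopping early also avoids
        // any risk of long overflow for large attempt counts.
        for (int i = 0; i < attempts && backoff < maxBackoffMs; i++) {
            backoff *= 2;
        }
        return Math.min(backoff, maxBackoffMs);
    }

    public static void main(String[] args) {
        CappedBackoff b = new CappedBackoff(100, 5L * 60 * 1000); // cap at 5 min
        System.out.println(b.backoffMs(5));  // 3200
        System.out.println(b.backoffMs(20)); // 300000 (capped)
    }
}
```

With a cap like 5 minutes, an outage of several hours no longer inflates the wait: once the cap is hit, the connector keeps retrying at that fixed interval and reconnects promptly when the cluster comes back.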