Return re-connection failures immediately #159
base: master
Conversation
A new testcase is added that simulates a temporary network problem which causes node reconnects to fail. The clusterclient test program is updated to skip empty lines and comments, and maxretry is limited to 2 (default 5).
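For context, a minimal sketch of the kind of clusterclient loop this refers to (not the actual clusterclient.c; the header name and the connect/option helpers are assumptions based on the public hiredis-cluster API, and redisClusterSetOptionMaxRetry is the option discussed below):

#include <stdio.h>
#include <string.h>
#include <hircluster.h>

int main(int argc, char **argv) {
    redisClusterContext *cc = redisClusterContextInit();
    redisClusterSetOptionAddNodes(cc, argc > 1 ? argv[1] : "127.0.0.1:7000");
    redisClusterSetOptionMaxRetry(cc, 2); /* lowered from the default of 5 */
    if (redisClusterConnect2(cc) != REDIS_OK) {
        fprintf(stderr, "connect error: %s\n", cc->errstr);
        return 1;
    }

    char line[256];
    while (fgets(line, sizeof(line), stdin)) {
        line[strcspn(line, "\n")] = '\0';
        if (line[0] == '\0' || line[0] == '#')
            continue; /* skip empty lines and comments in the command script */

        /* The line is used directly as the command to send. */
        redisReply *reply = (redisReply *)redisClusterCommand(cc, line);
        if (cc->err) {
            printf("error: %s\n", cc->errstr);
        } else {
            printf("%s\n", reply->str);
            freeReplyObject(reply);
        }
    }
    redisClusterFree(cc);
    return 0;
}

The diff below shows the error-handling part of that loop as changed by this PR.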
     redisReply *reply = (redisReply *)redisClusterCommand(cc, cmd);
     if (cc->err) {
-        fprintf(stderr, "redisClusterCommand error: %s\n", cc->errstr);
-        exit(101);
+        printf("error: %s\n", cc->errstr);
     } else {
         printf("%s\n", reply->str);
     }
So now we can continue to use the cluster context after a reconnect failure? Sending queries still works as long as it's not the failing node? What would happen if we'd do that without this change? Would it crash the program?
How does redisClusterSetOptionMaxRetry affect the behaviour?
I think the PR description lacks some details about behaviour before and after.
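For concreteness, something like this sketch is the usage pattern I have in mind (the keys and the wrapper function are made up; the calls are the ordinary hiredis-cluster ones):

#include <stdio.h>
#include <hircluster.h>

/* Sketch: keep using the same cluster context while one node is
 * temporarily unreachable. */
static void probe(redisClusterContext *cc) {
    /* Suppose this key maps to the node that is currently down. */
    redisReply *r1 = (redisReply *)redisClusterCommand(cc, "SET foo first");
    if (cc->err)
        printf("first command: %s\n", cc->errstr); /* reconnect failure reported here */
    else
        freeReplyObject(r1);

    /* And suppose this key maps to a healthy node. Does this still
     * succeed, before and after the change, or could it crash? */
    redisReply *r2 = (redisReply *)redisClusterCommand(cc, "GET bar");
    if (cc->err)
        printf("second command: %s\n", cc->errstr);
    else
        freeReplyObject(r2);
}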
-        redisReconnect(c);
+        if (redisReconnect(c) != REDIS_OK) {
+            __redisClusterSetError(cc, c->err, c->errstr);
+            return NULL;
+        }

         if (cc->ssl && cc->ssl_init_fn(c, cc->ssl) != REDIS_OK) {
             __redisClusterSetError(cc, c->err, c->errstr);
I saw this too, but I thought ssl_init_fn was supposed to handle a connection that failed to reconnect.
It seems cleaner to return ASAP though, but I'd like to understand the consequences better.
# A reconnect failure triggers a search for an available node
EXPECT ["PING"]
SEND +PONG
EXPECT ["SET", "foo", "second"]
SEND -MOVED 12182 127.0.0.1:7402
Ah, looking at the commits one by one explains it. :)
Legacy behaviour: after N failed reconnect attempts, the command is sent to a random node, which returns a redirect (after a PING), followed by a slot map update? Then the same thing is repeated M times. I don't quite follow. Is maxretry = N or maxretry = M?
Includes a new testcase in a separate commit which visualizes the legacy behaviour.
The testcase simulates a temporary network problem that causes node reconnects to fail.