# Can't connect to cluster via NAT with enableReadyCheck option #930
There isn't debug info about that. Another strange thing is that the `cluster slots` command returns the same node twice, which seems to be the problem. Did your natMap map different nodes to the same address?
I fixed the natMap; however, the issue still remains. natMap:
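As a hypothetical illustration of the general shape of an ioredis `natMap` (the addresses and ports below are placeholders, not the values from this issue):

```js
// Hypothetical natMap sketch; all addresses and ports are placeholders.
const Redis = require("ioredis");

const cluster = new Redis.Cluster(
  [{ host: "nat-gateway.example.com", port: 6379 }],
  {
    natMap: {
      // internal cluster address -> address reachable through the NAT
      "10.0.1.1:6379": { host: "nat-gateway.example.com", port: 6379 },
      "10.0.1.2:6379": { host: "nat-gateway.example.com", port: 6380 },
    },
  }
);
```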
I added a few debug statements. The reason for using the DNS name is that this is what the Amazon ElastiCache FAQs recommend.
In any case, even if I change the natMap to use the IP rather than the hostname, it still continually tries to reconnect in a loop. Can you see anything else I should try?
It still works fine if I set `enableReadyCheck` to `false`.
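A minimal sketch of that workaround, assuming it sits in the Cluster constructor options (the startup node address is a placeholder):

```js
// Sketch of the workaround: disable the cluster ready check entirely.
// The startup node address is a placeholder.
const Redis = require("ioredis");

const cluster = new Redis.Cluster(
  [{ host: "nat-gateway.example.com", port: 6379 }],
  { enableReadyCheck: false }
);
```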
Thanks for the details. I've spotted the root cause of this issue: the ready check command was being sent to the node (52.4.26.14) that was disconnecting, instead of the new node. I'll create a fix for that this week. For now, you can safely keep `enableReadyCheck` turned off.
🎉 This issue has been resolved in version 4.12.2 🎉 The release is available on npm. Your semantic-release bot 📦🚀
## [4.12.2](redis/ioredis@v4.12.1...v4.12.2) (2019-07-16)

### Bug Fixes

* **cluster:** prefer master when there're two same node for a slot ([8fb9f97](redis/ioredis@8fb9f97))
* **cluster:** remove node immediately when slots are redistributed ([ecc13ad](redis/ioredis@ecc13ad)), closes [#930](redis/ioredis#930)
This has been happening for a while, but we just set `enableReadyCheck` (connection option) to `false`, so it's not super urgent.

I have reproduced this locally and running in Lambda, but only when connecting via a NAT server. It continually fails and calls `clusterRetryStrategy`, never gaining a `ready` state.

I have been able to fix this with a small change to the built ioredis code in `cluster/index.js:153`, by simply surrounding the call to the readyCheck with `process.nextTick`:
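A rough sketch of the kind of change being described, with simplified stand-ins for the ioredis internals rather than the actual source:

```js
// Rough sketch of the workaround, not the actual ioredis code around
// cluster/index.js:153. `readyCheck`, `markReady`, and `retry` are
// hypothetical stand-ins for the internals involved.
function runDeferredReadyCheck(readyCheck, markReady, retry) {
  // Before the change (roughly): readyCheck(...) was invoked synchronously.
  // After the change: the same call is wrapped in process.nextTick, so any
  // in-flight handling for the disconnecting node can settle first.
  process.nextTick(() => {
    readyCheck((err, ok) => {
      if (!err && ok) {
        markReady();
      } else {
        retry(err);
      }
    });
  });
}
```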
I don't know why this works. I'm hoping either this will help you make a better fix than this, or, if you are happy with this fix, I am happy to submit a PR.
ioredis 4.11.2 (+ previous versions)