All of a sudden, I'm receiving this error #390
Could you please enable the debug mode?
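(For reference, and not part of the original comment: ioredis logs its internals through the `debug` package, so debug mode is normally enabled with the `DEBUG=ioredis:*` environment variable. The snippet below is a sketch of both ways, assuming the `debug` package is available, as it is wherever ioredis is installed.)

```js
// Sketch: enabling ioredis debug output. The usual way is to start the
// process with the DEBUG environment variable set, e.g.
//   DEBUG=ioredis:* node app.js
// The programmatic equivalent (must run before commands are issued):
require('debug').enable('ioredis:*');

const Redis = require('ioredis');
const redis = new Redis(); // commands now emit ioredis:* debug lines
```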
OK, I will keep you updated.
So today it happened again, and this is the log with DEBUG: https://gist.github.com/thelinuxlich/365b8cafc98295b14077a38a44a949a7
When a command is rejected with an error…
I'm configuring it to connect to all 6 nodes, and when this happens, I have to restart the whole process :(
Some more info: I'm using Node 5.5, and the Redis operations consist of GET, SET, SETEX, and EVALSHA scripts (using defineCommand).
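(For illustration only, not from the original thread: a sketch of the kind of setup described, with hypothetical hosts/ports and a made-up `getAndTouch` script.)

```js
const Redis = require('ioredis');

// Hypothetical 6-node cluster configuration.
const cluster = new Redis.Cluster([
  { host: '10.0.0.1', port: 6379 },
  { host: '10.0.0.2', port: 6379 },
  { host: '10.0.0.3', port: 6379 },
  { host: '10.0.0.4', port: 6379 },
  { host: '10.0.0.5', port: 6379 },
  { host: '10.0.0.6', port: 6379 },
]);

// defineCommand registers a Lua script; ioredis runs it with EVALSHA and
// transparently falls back to EVAL when the script isn't cached yet.
cluster.defineCommand('getAndTouch', {
  numberOfKeys: 1,
  lua: 'redis.call("EXPIRE", KEYS[1], ARGV[1]); return redis.call("GET", KEYS[1])',
});

async function demo() {
  await cluster.set('foo', 'bar');
  await cluster.setex('baz', 60, 'qux'); // SETEX: SET with a TTL in seconds
  console.log(await cluster.getAndTouch('foo', 60));
}

demo().catch(console.error);
```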
Is there any more info I can provide to help pinpoint the issue?
@thelinuxlich Is this issue related to your environment? If the issue cannot be reproduced with general setups, it would be very helpful if you could provide the Docker files so I can debug the problem locally in exactly the same environment as yours. Otherwise, is it possible to help me narrow down where the problem happens?
I'm testing with older ioredis versions; let's see.
Interesting, ioredis 2.0.0 is doing okay.
That's strange, since there's not much difference between the two versions: v2.0.0...v2.4.0. Hmm...
Now I can confirm that switching to ioredis 2.0.0 totally mitigates the problem.
@thelinuxlich Could you please also test with ioredis v2.1.0 & v2.2.0 so we can know in which version the issue was introduced? Thanks a lot 😆
After days of testing, I can confirm this happens in ioredis 2.2+.
That means 2.1.0 doesn't have the problem? Looks like there's not much difference in cluster mode between 2.1.0 & 2.2.0: v2.1.0...v2.2.0. Hmm...
To me it looks like cluster misconfiguration. The difference is that ioredis creates a new node based on a report from a cluster node, which tells it to ask some new node not in the list; it tries to ask that node, gets another redirect, and so on. One would assume that the nodes report wrong IP addresses for some reason, and that leads to redirection loops. The other possibility is that the response is parsed incorrectly, but I assume that's not the case.
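(Side note, not from the thread: ioredis bounds how far a command will follow such MOVED/ASK redirect chains via the `maxRedirections` cluster option; a minimal sketch with a hypothetical seed node.)

```js
const Redis = require('ioredis');

// maxRedirections caps how many MOVED/ASK redirects a single command may
// follow; once exceeded, the command fails with "Too many Cluster
// redirections" instead of looping forever.
const cluster = new Redis.Cluster(
  [{ host: '127.0.0.1', port: 7000 }], // hypothetical seed node
  { maxRedirections: 16 }              // the default; lower it to fail fast
);
```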
Well, it can't be misconfiguration, because I'd been using ioredis for more than a year without changes in configuration, and all of a sudden, when I updated from 2.0 to 2.4, this began to happen.
@thelinuxlich Maybe you are right. The place it can happen is…
I recently used an ioredis cluster in production and got the above-mentioned errors. Just wanted to check up on the status of the issue. Is it resolved in the latest version?
@Aditya-Chowdhry No, the issue hasn't been reproduced in my environment.
Fixed link.
@AVVS Thanks for the update. How is this line of code related to the issue?
In my case, I am getting this same exact error repeatedly, where the cluster consists of only 4 masters (ports 6389, 6390, 6391, 6392). I have another cluster consisting of masters & slaves for which I am not getting any error.
@Aditya-Chowdhry Was 6389 down while you were getting the error? Is the issue reproducible?
@luin No, 6389 was not down. I am not able to reproduce it locally. Debugging further; will get back ASAP if I find something.
@luin Got the issue. The error that I posted occurs when a null cache key is passed. I wrote a rough small snippet to reproduce the issue:
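(The original reproduction snippet wasn't captured in this transcript; the following is a hypothetical reconstruction along the lines described: passing a null key to a cluster of four masters on ports 6389–6392.)

```js
const Redis = require('ioredis');

const cluster = new Redis.Cluster([
  { host: '127.0.0.1', port: 6389 },
  { host: '127.0.0.1', port: 6390 },
  { host: '127.0.0.1', port: 6391 },
  { host: '127.0.0.1', port: 6392 },
]);

// Passing a null/undefined/"" key: before the fix in v3.2.1, such commands
// were routed to a random node, which could surface cluster errors.
cluster.get(null)
  .then((value) => console.log('value:', value))
  .catch((err) => console.error('error:', err));
```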
Before this fix, empty key names (""/null/undefined) were sent to random nodes. That should not be the case when the key name is an empty string. Related to #390. Thanks @Aditya-Chowdhry for addressing this issue!
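(An aside, not part of the commit message: the routing idea behind the fix can be seen with the cluster-key-slot package, which ioredis depends on; an empty-string key still hashes to a well-defined slot.)

```js
const calculateSlot = require('cluster-key-slot');

// An empty string is a valid key name and hashes to a deterministic slot,
// so it should be routed like any other key, not sent to a random node.
console.log(calculateSlot(''));    // 0     (CRC16('') % 16384)
console.log(calculateSlot('foo')); // 12182
```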
@Aditya-Chowdhry Just fixed this issue in v3.2.1. Thank you for the example; that helps!
@luin One question: why, when the key…
@tuananh
This issue has been automatically marked as stale because it has not had recent activity. It will be closed after 7 days if no further activity occurs, but feel free to re-open a closed issue if needed. |
I'm seeing this issue with 3.2.2 in our production environment with three Redis nodes (different IPs).
I have a 6-node setup; I saw this in the log of one node:
…
I think ioredis is not recovering well alongside the Redis cluster (3.2).