[redis-cluster] All nodes turned to master #5431
Comments
Hi, Thank you for using Bitnami. Could you provide the values that you used for deploying the chart?
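(For anyone hitting this later: if the original values file is no longer at hand, Helm can report the values a release was deployed with. A minimal sketch, assuming a release named `my-release` in namespace `redis` — both names are placeholders:)

```bash
# Print the user-supplied values of an existing release
# (release name "my-release" and namespace "redis" are assumptions)
helm get values my-release --namespace redis
```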
Hi, We've investigated this issue in the past and added several fixes to avoid this situation. Could you confirm that the issue did not occur at the initial startup? I suspect it could be related to the nodes restarting.
Yes. Could this be related to the Redis version?
Please let us know if the incident happens again, along with the reason for the shutdown. We want to avoid issues like this and find out their cause.
Hi. Then I ran this:

This added two new nodes, then started to restart some of the other nodes in the cluster, and they all changed to master. I'm not sure this is 100% reproducible, but if you can't reproduce it I'll be happy to try to get it to happen again.
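(The exact command above was not captured in this transcript. As a purely hypothetical sketch of a scale-up with this chart, assuming a release named `my-release` and the chart's documented `cluster.update` parameters:)

```bash
# Hypothetical scale-up from 12 to 14 nodes; the release name,
# node counts, and password variable are all assumptions,
# not the reporter's actual command
helm upgrade my-release bitnami/redis-cluster \
  --set "password=${REDIS_PASSWORD}" \
  --set cluster.nodes=14 \
  --set cluster.update.addNodes=true \
  --set cluster.update.currentNumberOfNodes=12
```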
Hi, We are unable to reproduce it. Could it be that something crashed during the first initialization? Do you see any crash before the nodes turned to master?
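(A quick way to check for earlier crashes is the pod restart count and the previous container instance's log. A sketch with placeholder pod and namespace names:)

```bash
# The RESTARTS column reveals prior crashes (namespace is an assumption)
kubectl get pods -n redis
# Logs of the previous (crashed) container instance, if any
kubectl logs my-release-redis-cluster-0 -n redis --previous
```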
This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.
Hey, I also got this problem in my staging environment.
Hi, Could you provide more details about the chart version you are using and the Kubernetes platform? Do the logs show anything meaningful?
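(Commands along these lines can gather the requested details; the namespace is a placeholder:)

```bash
# Deployed chart version of each release (namespace is an assumption)
helm list -n redis
# Kubernetes client and server versions
kubectl version --short
```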
This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.
Due to the lack of activity in the 5 days since it was marked as "stale", we are proceeding to close this Issue. Do not hesitate to reopen it later if necessary.
@naveedsyed1746 Let's continue the conversation at #5418
Which chart:
redis-cluster-4.2.3
Describe the bug
We have a cluster of 12 nodes - 6 masters and 6 slaves.
We had some errors in our system, and looking at Redis I saw that some data was missing.
I wanted to compare each master with its slave to see if the data exists in either, but all the nodes are masters.
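(For reference, one way to confirm every node's role; the pod and namespace names below are placeholders:)

```bash
# Ask any cluster member for the full node table; the "flags" column
# shows master/slave per node (pod/namespace names are assumptions)
kubectl exec -it my-release-redis-cluster-0 -n redis -- \
  redis-cli -a "$REDIS_PASSWORD" cluster nodes
```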
To Reproduce
I'm not really sure how to reproduce this. Some of our pods did get restarted a few times, so it might be related.
Expected behavior
The cluster should have 6 masters and 6 slaves.
Version of Helm and Kubernetes:
helm version:
kubectl version:
Additional context
logs from a pod that did get restarted:
output for cluster nodes:
logs from a pod that didn't get restarted: