feat(redis): Add pod label of redis role, to support Master/Slave model. #419
Conversation
Set a redis role label on each redis pod, so clients can connect to the master directly. The master or slave pods can also be found directly with `kubectl get po -o wide --show-labels`. In this way, the redis cluster can support the sentinel model and the master/slave model at the same time.
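For example, assuming the operator sets a label such as `redis-role` with values `master`/`slave` (the exact key used here is illustrative, not necessarily the one this PR adds), the pods can be filtered by role:

```sh
# List only the current master pod (label key/value are assumptions):
kubectl get po -l redis-role=master -o wide --show-labels

# List only the slave pods:
kubectl get po -l redis-role=slave -o wide --show-labels
```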
@ese We need this feature too.
@shangjin92 Thanks for the contribution, and sorry for the slow feedback. @jiuker I will be taking care of the contributions and pushing a release cycle from the second week of August to the end of the month.
@ese Any updates on this?
I'm OK with it; just take into account that the label could be updated with a delay when a master failover happens, since the reliable mechanism we trust for this is Sentinel.
@shangjin92 @jiuker I totally agree with @ese: to reliably configure the endpoint, it should be done by querying Sentinel or watching for events from the Sentinels. We do this...
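As a rough sketch of the Sentinel-based approach, the current master address can be asked from a Sentinel directly instead of trusting pod labels (the deployment name `rfs-myredis` and master name `mymaster` below are assumptions for illustration):

```sh
# Ask a Sentinel which instance is currently the master:
kubectl exec deploy/rfs-myredis -- \
  redis-cli -p 26379 SENTINEL get-master-addr-by-name mymaster
```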
LGTM |
I tried this on our cluster by adding a Service targeting the master pod. I'm curious whether it is within the scope of the operator to solve this more seamlessly by watching Sentinel events, or is this expected to always be handled outside of the operator? Using the current approach, a failover (triggered by deleting the master pod) sometimes results in zero downtime and sometimes in a downtime of about ~10s according to my tests (a loop where I …)
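For reference, a minimal sketch of what "watching Sentinel events" could look like from outside the operator: Sentinel publishes a `+switch-master` event on failover, so a small controller or sidecar could subscribe and re-point the Service as soon as the event arrives (names below are assumptions):

```sh
# Subscribe to failover notifications; each message carries
# "<master-name> <old-ip> <old-port> <new-ip> <new-port>":
kubectl exec deploy/rfs-myredis -- \
  redis-cli -p 26379 SUBSCRIBE +switch-master
```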
Set a redis role label on each redis pod, so clients can connect to the master directly by adding a redis-master-service. The master or slave pods can also be found directly with `kubectl get po -o wide --show-labels`. In this way, the redis cluster can support the sentinel model and the master/slave model at the same time.
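A minimal sketch of the redis-master-service idea, assuming the role label key is `redis-role` and the pods also carry an `app=redis` label (both labels and the service name are illustrative, not the operator's actual values):

```sh
# Create a ClusterIP Service and point its selector at the master pod's labels:
kubectl create service clusterip redis-master-service --tcp=6379:6379
kubectl set selector service redis-master-service 'app=redis,redis-role=master'
```

Clients can then reach the current master at `redis-master-service:6379`; because the Service selects by label, its endpoint follows the master whenever the operator relabels the pods after a failover.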