change update strategy #203
Conversation
Found some fixes!
P.S. share your ideas, feedbacks or issues with us at https://github.com/fixmie/feedback (this message will be removed after the beta stage).
service/redis/client.go
Outdated
@@ -301,3 +303,18 @@ func (c *client) getConfigParameters(config string) (parameter string, value str
	}
	return s[0], strings.Join(s[1:], " "), nil
}

func (c *client) IsSyncing(ip, password string) (bool, error) {
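The diff only shows the `IsSyncing` signature. One plausible implementation, assuming the method inspects the slave's `INFO replication` output (the `master_sync_in_progress` field is a real Redis INFO field; the network code for fetching it is omitted here, so the text is passed in directly):

```go
package main

import (
	"fmt"
	"strings"
)

// isSyncing reports whether a slave's INFO replication output shows an
// initial sync still in progress. The real method on *client would fetch
// this text over a Redis connection using ip and password; this sketch
// takes the raw INFO text as input instead.
func isSyncing(replicationInfo string) (bool, error) {
	for _, line := range strings.Split(replicationInfo, "\r\n") {
		if strings.HasPrefix(line, "master_sync_in_progress:") {
			return strings.TrimPrefix(line, "master_sync_in_progress:") != "0", nil
		}
	}
	return false, fmt.Errorf("master_sync_in_progress field not found")
}

func main() {
	syncing, err := isSyncing("role:slave\r\nmaster_sync_in_progress:1\r\n")
	fmt.Println(syncing, err) // true <nil>
}
```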
I would use a function `slaveIsReady` for the safety of the update process. This can provide a wider overview of cluster health by checking more conditions on the slaves than just the syncing state:
- master_host != 127.0.0.1 (pod in bootstrap state; needs to be added to the cluster by the operator)
- master_link_status:up
- master_sync_in_progress:0
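A minimal sketch of such a check, assuming the slave's `INFO replication` output is available as a string (the three field names are real Redis INFO fields; `parseInfo` and `slaveIsReady` are hypothetical helpers, not the operator's actual code):

```go
package main

import (
	"fmt"
	"strings"
)

// parseInfo turns "key:value" lines from Redis INFO output into a map.
func parseInfo(info string) map[string]string {
	fields := map[string]string{}
	for _, line := range strings.Split(info, "\r\n") {
		if parts := strings.SplitN(line, ":", 2); len(parts) == 2 {
			fields[parts[0]] = parts[1]
		}
	}
	return fields
}

// slaveIsReady applies the three conditions from the review comment.
func slaveIsReady(info string) bool {
	f := parseInfo(info)
	return f["master_host"] != "127.0.0.1" && // not in bootstrap state
		f["master_link_status"] == "up" && // link to the master is established
		f["master_sync_in_progress"] == "0" // initial sync has finished
}

func main() {
	info := "role:slave\r\nmaster_host:10.0.0.5\r\nmaster_link_status:up\r\nmaster_sync_in_progress:0"
	fmt.Println(slaveIsReady(info)) // true for a healthy, fully synced slave
}
```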
done :)
Co-Authored-By: Sergio Ballesteros <[email protected]>
Co-Authored-By: fixmie[bot] <44270338+fixmie[bot]@users.noreply.github.com>
- This can lead to problems, as the master can be relaunched while the new slave pods are still syncing.
- So now the controller takes over the update task, killing pods one by one and proceeding only when syncing has finished on all slaves.
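The pod-by-pod strategy described above can be sketched as follows (all names are hypothetical; the real controller would use the operator's Kubernetes client and requeue instead of breaking):

```go
package main

import "fmt"

// Slave models the minimal state the controller needs per slave pod.
type Slave struct {
	Name    string
	Syncing bool
}

// allSynced reports whether every slave has finished its initial sync.
func allSynced(slaves []Slave) bool {
	for _, s := range slaves {
		if s.Syncing {
			return false
		}
	}
	return true
}

// rollingUpdate kills pods one by one, moving to the next pod only when
// no slave is still syncing. Simplified: no retries, timeouts, or actual
// pod deletion; it returns the order in which pods would be killed.
func rollingUpdate(slaves []Slave) []string {
	var order []string
	for i := range slaves {
		if !allSynced(slaves) {
			break // a real controller would requeue and retry later
		}
		order = append(order, slaves[i].Name)
	}
	return order
}

func main() {
	slaves := []Slave{{Name: "redis-0"}, {Name: "redis-1"}}
	fmt.Println(rollingUpdate(slaves)) // [redis-0 redis-1]
}
```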