Issue #348: added option to trigger cluster restart on static config change #349
Conversation
Signed-off-by: aparajita.singh <[email protected]>
Codecov Report
@@           Coverage Diff           @@
##           master     #349   +/-   ##
=======================================
  Coverage   84.52%   84.52%
=======================================
  Files          11       11
  Lines        1376     1376
=======================================
  Hits         1163     1163
  Misses        140      140
  Partials       73       73
=======================================

Continue to review the full report at Codecov.
@aparajita89 In my testing, I observed that
This is happening because removal of the annotation causes a pod restart with the new values set in the config. I'm checking whether adding a pre-install hook to the Helm chart, to fetch the old value of the annotation and re-apply it, will fix the issue.
@anishakj In a cluster where the checksum/config value is already set in the annotations, if we try to update the configs with restartOnConfigChange set to false, the value of the annotation becomes null and this triggers a pod restart. To avoid this, I tried to pull the existing value of the checksum/config annotation into a Helm variable, then set that same value for checksum/config when restartOnConfigChange is false, but recompute the checksum when it is true.
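A minimal sketch of that conditional, using Helm 3's built-in lookup function to read the live object (the StatefulSet name, values key, and template path here are assumptions for illustration, not the actual chart's):

```yaml
{{- /* Sketch: read the live StatefulSet; its name is assumed to equal the release name */ -}}
{{- $existing := lookup "apps/v1" "StatefulSet" .Release.Namespace .Release.Name }}
metadata:
  annotations:
    {{- if .Values.restartOnConfigChange }}
    {{- /* Recompute the hash so config changes roll the pods */}}
    checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
    {{- else if $existing }}
    {{- /* Carry the previous hash forward so the annotation neither changes nor becomes null */}}
    checksum/config: {{ index $existing.spec.template.metadata.annotations "checksum/config" }}
    {{- end }}
```

One caveat with this approach: lookup returns an empty map during helm template and --dry-run, so the reuse branch only takes effect on a real install or upgrade.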
Changing the config value in the CRD would not change the value of the annotation. We would need to change the operator code so that it watches for changes in the static configs and updates the checksum/config annotation in the pod spec; Kubernetes would then trigger the pod restart since the value of the annotation has changed.

Perhaps it would be more useful to abandon the idea of an automatic cluster restart when static configs change, and instead implement a more generic rolling-restart feature via the CRD: introduce a new field "triggerRollingRestart" as part of the CRD spec. In case of a static config change, the CRD would be updated with the new static configs; this would have no impact on the existing cluster. "triggerRollingRestart" could then be set to true to restart the pods on demand.
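A sketch of how the proposed field might look on the custom resource (assuming the ZookeeperCluster kind from this operator; the triggerRollingRestart field is a proposal, not an existing API):

```yaml
apiVersion: zookeeper.pravega.io/v1beta1
kind: ZookeeperCluster
metadata:
  name: zookeeper
spec:
  replicas: 3
  # Proposed field: the operator would watch for this flipping to true,
  # roll the pods, and could then reset it to false.
  triggerRollingRestart: true
```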
Sure, this looks like a cleaner approach.
Will raise another PR for this.
Change log description
When static configs are changed (such as initLimit and the other settings that go into zoo.cfg), the ZooKeeper cluster needs to be restarted for the configs to take effect. This change adds a flag to the Helm chart values to optionally trigger a cluster redeploy when static configs are modified.
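The discussion above refers to this flag as restartOnConfigChange; assuming that name and a conservative default, the corresponding values entry would be something like:

```yaml
# values.yaml (sketch; flag name taken from the discussion above, default assumed)
restartOnConfigChange: false
```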
Purpose of the change
Refer to #348.
What the code does
These changes compute a SHA-256 hash of the values that go into zoo.cfg via the config map. The hash is set as the value of a custom annotation in the pod spec for the ZooKeeper nodes. Whenever the Helm chart is upgraded, the hash is recomputed and the annotation is updated; this update to the annotation triggers a pod restart for all the nodes in the ZooKeeper cluster.
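The mechanism is the standard Helm checksum-annotation pattern; a minimal sketch, assuming the values key and template path used above (the actual diff may differ):

```yaml
# statefulset.yaml (sketch): annotate pods with a hash of the rendered ConfigMap
spec:
  template:
    metadata:
      annotations:
        {{- if .Values.restartOnConfigChange }}
        {{- /* Recomputed on every `helm upgrade`; a changed hash changes the
               pod template, so Kubernetes rolls all the pods */}}
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
        {{- end }}
```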
How to verify it