watch for changes in trusted CA configmap #113
/test e2e-aws-upgrade
/hold
/test e2e-aws-upgrade
Just some docs and logging requests, otherwise looks good.
var _ reconcile.Reconciler = &ReconcileConfigMap{}

// ReconcileConfigMap reconciles a ConfigMap against the on-disk contents.
Let's add a little more clarification here that this is for a very specific configmap we want to watch to determine whether we should shut down. The controller is quite generically named, and you need to go straight to the reconcile loop to understand why we're watching configmaps.
type ReconcileConfigMap struct {
	client.Client
	logger            log.FieldLogger
	configMapDataHash string
}
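For illustration, a hedged sketch of the kind of doc comment this suggests. The package name and import paths are assumed here, and the configmap identity is taken from the discussion below rather than from the diff:

```go
// Package name and imports are assumptions for illustration only.
package configmap

import (
	log "github.com/sirupsen/logrus"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// ReconcileConfigMap watches one specific configmap -- the trusted CA bundle
// consumed by this pod -- rather than configmaps in general. On the first
// reconcile it records a hash of the CA data; on later reconciles it compares
// against that hash and, if the data has changed, exits the process so the
// pod restarts with the updated CA content.
type ReconcileConfigMap struct {
	client.Client

	logger            log.FieldLogger
	configMapDataHash string
}
```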
This is persistent state across reconcile loops, isn't it? That's interesting; we don't have a lot of examples of that. It looks risky for concurrency, but then we're only reconciling on one single configmap. Looks like it should be OK to me.
Yes, persistent across reconcile calls. And as you say, since we're really only watching one single configmap (openshift-cloud-credential-operator/cco-trusted-ca), this should be safe.
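A rough sketch of how the watch can be limited to that one configmap, written against the older controller-runtime API of this era (where events expose `Meta`/`MetaNew`); the actual wiring in the PR may differ:

```go
package configmap

import (
	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/controller-runtime/pkg/controller"
	"sigs.k8s.io/controller-runtime/pkg/event"
	"sigs.k8s.io/controller-runtime/pkg/handler"
	"sigs.k8s.io/controller-runtime/pkg/predicate"
	"sigs.k8s.io/controller-runtime/pkg/source"
)

// addWatch filters configmap events down to the single trusted CA configmap,
// which is what makes the one cached hash safe across reconcile calls.
func addWatch(c controller.Controller) error {
	isTrustedCA := func(namespace, name string) bool {
		return namespace == "openshift-cloud-credential-operator" && name == "cco-trusted-ca"
	}
	return c.Watch(&source.Kind{Type: &corev1.ConfigMap{}}, &handler.EnqueueRequestForObject{},
		predicate.Funcs{
			CreateFunc: func(e event.CreateEvent) bool {
				return isTrustedCA(e.Meta.GetNamespace(), e.Meta.GetName())
			},
			UpdateFunc: func(e event.UpdateEvent) bool {
				return isTrustedCA(e.MetaNew.GetNamespace(), e.MetaNew.GetName())
			},
		},
	)
}
```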
cmHash := fmt.Sprintf("%x", md5.Sum([]byte(cm.Data[configMapKeyName])))

if r.configMapDataHash == "" {
	r.logger.Info("Saving hash of configmap containing CA data")
When initializing r.logger, let's add the configmap namespace/name and the key as context fields.
Maybe "saving hash of proxy CA configmap".
r.logger.Info("Saving hash of configmap containing CA data") | ||
r.configMapDataHash = cmHash | ||
} else if r.configMapDataHash == cmHash { | ||
r.logger.Debug("no change in configmap detected") |
"no change in proxy CA configmap detected"
} else if r.configMapDataHash == cmHash {
	r.logger.Debug("no change in configmap detected")
} else {
	r.logger.Info("Configmap change detected, restarting pod")
"proxy CA configmap change detected, restarting pod"
In order to have the pod run with an up-to-date set of trusted CAs, watch the configmap containing the pod's CAs, and if the contents change, exit the container so that we get restarted with the new CA content.
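A minimal, self-contained sketch of that hash-and-restart logic, assuming the CA bundle sits under a single key in the configmap's data; the helper and variable names here are illustrative, not the PR's:

```go
package main

import (
	"crypto/md5"
	"fmt"
	"os"
)

// savedHash persists across calls, mirroring the configMapDataHash field on
// the reconciler in the PR.
var savedHash string

// checkTrustedCA hashes the CA data from the configmap and exits the process
// when it differs from the hash recorded on the first call, so the container
// restarts with the new trusted CA bundle.
func checkTrustedCA(data map[string]string, key string) {
	cmHash := fmt.Sprintf("%x", md5.Sum([]byte(data[key])))

	switch {
	case savedHash == "":
		fmt.Println("saving hash of proxy CA configmap")
		savedHash = cmHash
	case savedHash == cmHash:
		// No change in the proxy CA configmap; nothing to do.
	default:
		fmt.Println("proxy CA configmap change detected, restarting pod")
		os.Exit(0)
	}
}

func main() {
	checkTrustedCA(map[string]string{"ca-bundle.crt": "example CA data"}, "ca-bundle.crt")
}
```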
/hold release
/hold cancel
06b8b07 to 1c9a82c
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: dgoodwin, joelddiaz

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment.
Do we need this? From here:
@wking If memory serves me correctly, the way I understand this is that the configmap changes eventually make it into the container, but the database of trusted CAs wouldn't be reflected in the controller without rebuilding the trusted CA database.

Ah, yeah, turns out that Go caches the system store on first load and then never reloads the system pool. I'll see if I can get some motion on that upstream.
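For context, a tiny illustrative program showing the behavior being described: `crypto/x509` builds the system root pool the first time it is requested and caches it for the life of the process, so on-disk CA updates aren't seen until restart. The caching detail is from the Go standard library, not from this PR:

```go
package main

import (
	"crypto/x509"
	"fmt"
)

func main() {
	// The first call loads the system roots from disk and caches them.
	pool, err := x509.SystemCertPool()
	if err != nil {
		panic(err)
	}
	fmt.Println("system roots loaded at startup:", len(pool.Subjects()))

	// Later calls return the cached pool; changes to the on-disk trust
	// bundle made after this point are not picked up without a restart.
}
```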