Pull NetworkConfig logic out of Revision controller #1242
DomainConfig uses a similar pattern. I have some thoughts on what this might look like, so I may take a look at it this week (if @mdemirhan has not already started on it). /unassign @mdemirhan
It sounds (even from the docs) like ConfigMap updates through volumes can be very laggy:
tl;dr I think I'd propose a simple informer-based configuration model. I think we'd have something like:

```go
package configurationmanager

type Watcher func(*corev1.ConfigMap)

type Interface interface {
	Watch(name string, cb Watcher)
}

func New(informer) Interface {...}
```

Then Controllers would watch the ConfigMaps that they care about:

```go
package route

func New(..., cm configurationmanager.Interface) controller.Interface {
	...
	// Parse the ConfigMap into DomainConfig, see what's changed, and do what setDomainConfig does today.
	cm.Watch("domain-config", c.receiveDomainConfig)
}
```

```go
package revision

func New(..., cm configurationmanager.Interface) controller.Interface {
	...
	// Parse the ConfigMap into NetworkConfig, see what's changed, and do what setNetworkConfig does today.
	cm.Watch("network-config", c.receiveNetworkConfig)
}
```

This is in essence a generalization of what exists through informers directly today, but it hides some of that complexity behind a higher-level interface (that we can fake for testing). This model should avoid the lag of depending on ConfigMap volumes, and it also lets us know when we've picked up updates so we can trigger additional work. For example, when we pick up new sidecar images we may want to trigger a global reconciliation to roll them out (assuming a world where we reconcile the underlying Deployment).
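To make the "fake for testing" point concrete, the fake could be as simple as an in-memory implementation of the proposed interface that lets a test push ConfigMaps at the registered callbacks. A minimal sketch, assuming the hypothetical `configurationmanager` types from the snippet above:

```go
package configurationmanager

import corev1 "k8s.io/api/core/v1"

// FakeWatcher is a hypothetical in-memory Interface implementation for tests.
type FakeWatcher struct {
	callbacks map[string][]Watcher
}

func NewFake() *FakeWatcher {
	return &FakeWatcher{callbacks: map[string][]Watcher{}}
}

// Watch registers a callback for the named ConfigMap.
func (f *FakeWatcher) Watch(name string, cb Watcher) {
	f.callbacks[name] = append(f.callbacks[name], cb)
}

// Update simulates an informer event by invoking every callback
// registered for cm's name.
func (f *FakeWatcher) Update(cm *corev1.ConfigMap) {
	for _, cb := range f.callbacks[cm.Name] {
		cb(cm)
	}
}
```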
Yep, this makes sense to me.
I put together the linked PR that implements what's outlined above for … Ideally we'd have a single method for handling our Controller configs (including …).
* Refactor the way our Controllers watch ConfigMaps

  Previously the Revision and Route controllers used a ConfigMap informer directly to watch for changes. This generalizes that pattern into a slightly higher-level abstraction under `pkg/configmap`. With this `configmap.Watcher` abstraction, Controllers simply register to `Watch` particular `ConfigMap`s for changes via a callback:

  ```go
  cm.Watch("configmap-name", c.receiveSomeConfig)
  ```

  Once `Start()` is called on the `configmap.Watcher`, all of the registered `configmap.Observer`s will be invoked with the initial state of their `ConfigMap`s, and if any are not found the call to `Start()` will return an error. This consolidates the setup and update logic for the various configurations behind a single interface.

  This transitions the `NetworkConfig` and `DomainConfig` logic, which already employ this pattern (sans abstraction), to the new abstraction.

  Fixes: #1242

* Drop the kubeclient-based constructors. Rename `network_config.go` to have a filename consistent with `domainconfig.go`.

* Check that `configmap.defaultImpl.Start` is only called once. Also rename `defaultImpl`'s `watchers` field to `observers` to be consistent with the original intent.

* Incorporate feedback from @dprotaso: split `Start` into a handful of descriptively named helpers. Don't `Start` in a goroutine, since it doesn't block on `SharedInformerFactory`.
Co-authored-by: Stavros Kontopoulos <[email protected]>
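For reference, the shape of that abstraction as described above is roughly the following. This is a sketch reconstructed from the commit message, so names and signatures may not match the actual code under `pkg/configmap`:

```go
package configmap

import corev1 "k8s.io/api/core/v1"

// Observer is invoked with a ConfigMap's current state, once when the
// Watcher starts and again on every subsequent change.
type Observer func(*corev1.ConfigMap)

// Watcher lets a Controller register interest in named ConfigMaps.
type Watcher interface {
	// Watch registers an Observer for the named ConfigMap.
	Watch(name string, o Observer)

	// Start delivers the initial state of every watched ConfigMap to its
	// Observers and begins watching for changes; it returns an error if
	// any watched ConfigMap does not exist.
	Start(stopCh <-chan struct{}) error
}
```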
I'd rather have the controllers' use of informers focused on reconciliation. In the case of the `NetworkConfig`, I feel like we could do this by:

- exposing `SetNetworkConfig` on the revision controller
- `cmd/controller/main.go` doing something like:
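The snippet that followed isn't preserved in this thread. As a rough, hypothetical illustration of the idea (main.go watching the ConfigMap and pushing updates into the controller via the setter, rather than the controller watching it itself), the wiring might look something like this; the `watchNetworkConfig` helper and the `network-config` name are assumptions, not the actual code:

```go
// Hypothetical sketch, not the original snippet: cmd/controller/main.go
// watches the ConfigMap and pushes updates into the revision controller
// via a setter, keeping the controller's own informers focused on reconciliation.
package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/tools/cache"
)

// watchNetworkConfig is an assumed helper; "set" would be something like
// revisionController.SetNetworkConfig.
func watchNetworkConfig(factory informers.SharedInformerFactory, set func(*corev1.ConfigMap)) {
	inf := factory.Core().V1().ConfigMaps().Informer()
	inf.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			if cm, ok := obj.(*corev1.ConfigMap); ok && cm.Name == "network-config" {
				set(cm)
			}
		},
		UpdateFunc: func(_, newObj interface{}) {
			if cm, ok := newObj.(*corev1.ConfigMap); ok && cm.Name == "network-config" {
				set(cm)
			}
		},
	})
}
```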