Use Generation instead of checksum to detect changes in spec #1095
This should definitely be done before moving the API to beta, so I have set the 0.1.0 milestone.
@pmorie was there a resolution on whether Brokers can use `Generation` or not?
@duglin I think there's no reason why Broker cannot use generation. |
design proposal coming |
Proposed changes
Special considerations for Broker

This is a touchpoint with issue #705 for the broker relist API surface. Brokers must support manual relist. A field should be added:

```go
type BrokerSpec struct {
	// other fields omitted

	RelistRequests int `json:"relistRequests"`
}
```
This field should have a default value of zero, and should only be allowed to increase by one. These are somewhat arbitrary semantics, but they seem to be the simplest thing that can work.
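A minimal sketch of the update validation these semantics imply (the function name and abbreviated types are hypothetical, not code from this repo):

```go
package main

import "fmt"

// BrokerSpec is abbreviated; the field name comes from the proposal above.
type BrokerSpec struct {
	RelistRequests int `json:"relistRequests"`
}

// validateRelistRequests enforces the proposed semantics: the field
// defaults to zero and may only stay the same or increase by exactly one.
func validateRelistRequests(oldSpec, newSpec BrokerSpec) error {
	delta := newSpec.RelistRequests - oldSpec.RelistRequests
	if delta != 0 && delta != 1 {
		return fmt.Errorf("relistRequests may only be incremented by 1 (old=%d, new=%d)",
			oldSpec.RelistRequests, newSpec.RelistRequests)
	}
	return nil
}

func main() {
	// A +1 bump is a valid manual relist request; a +2 jump is rejected.
	fmt.Println(validateRelistRequests(BrokerSpec{0}, BrokerSpec{1}) == nil)
	fmt.Println(validateRelistRequests(BrokerSpec{0}, BrokerSpec{2}) == nil)
}
```

Combined with the usual `resourceVersion` precondition on updates, this keeps concurrent relist requests from silently stacking.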
```go
NeedsRelist bool `json:"needsRelist"`
```
`ReconciledGeneration` or `ObservedGeneration`? I see Observed in Deployment. I'm assuming we'll have code that detects a request to modify `NeedsRelist` and, aside from doing the "is it only +1" check you mentioned, if that passes, will force the `Generation` value to differ in some way from `ReconciledGeneration`, right?

Trying to think through the case of two requests to relist coming in at the same time: both will send the new ordinal value and both will have the correct Version value, but I suspect only one will win. The other will fail, causing it to re-GET and resend with the next ordinal value of `NeedsRelist` (+1 above the other request). I think that's ok. Plus, if the failed requester checks the error and sees it failed due to the +1 check and not the Version mismatch, then it can know it failed because someone else had already asked for a relist.
@pmorie I am curious though, was there an issue with the approach?
I'm fine with either. |
For the record, no issue with the approach other than a little extra code, but we seem to be converging on a subresource that bumps the counter.
If we have a bool, then the controller needs to unset the bool when it's done... which causes another reconcile because the spec changed. If we have an incremental counter, the controller doesn't have to do anything, and the resource behaves like other resources (spec change results in re-reconcile, controller doesn't have to mutate spec, etc).
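The counter argument above can be sketched as follows (abbreviated, hypothetical types; `ReconciledRelistRequests` is an illustrative status field name, not one from this proposal):

```go
package main

import "fmt"

// Hypothetical, abbreviated types illustrating the counter approach.
type BrokerSpec struct {
	RelistRequests int
}

type BrokerStatus struct {
	// Last value of spec.RelistRequests the controller has acted on.
	ReconciledRelistRequests int
}

type Broker struct {
	Spec   BrokerSpec
	Status BrokerStatus
}

// relistNeeded compares spec and status counters. With a counter the
// controller only ever writes status, so finishing its work does not
// change the spec and does not trigger another reconcile.
func relistNeeded(b Broker) bool {
	return b.Spec.RelistRequests != b.Status.ReconciledRelistRequests
}

func reconcile(b *Broker) {
	if relistNeeded(*b) {
		// ... relist the broker's catalog here ...
		b.Status.ReconciledRelistRequests = b.Spec.RelistRequests
	}
}

func main() {
	b := Broker{Spec: BrokerSpec{RelistRequests: 1}}
	reconcile(&b)
	fmt.Println(relistNeeded(b)) // false: done, and the spec was never mutated
}
```

With a bool, the equivalent `reconcile` would have to write `Spec.NeedsRelist = false`, and that spec write is itself a change the controller would observe again.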
ah, missed the "causing another reconcile" part... thanks
Actually I remember now why I suggested it. So, specific examples:

synchronous provision
asynchronous provision
Make sense? In the end, I guess the name doesn't matter as much as making sure the semantics of the field are clear to users and programmers. |
yup - and it was for consistency that I was thinking we'd want to dup what other kube resources have. One question I have, and it's related to what we were talking about off-line: in your async case, in step 7 what value does the controller use for `state.reconciledGeneration`? It can't just grab the latest value of `generation`; it seems like it would need to cache the value at the time it started the reconciliation process.
@duglin and I spent some time talking about asynchronous operations this weekend, and we realized that there is a need to store the generation that an asynchronous operation is for. This applies to instances only currently, but would apply to bindings (or, ahem |
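The need described above, recording which generation an in-flight asynchronous operation is for, can be sketched like this (abbreviated, hypothetical types and field names such as `OperationGeneration`):

```go
package main

import "fmt"

// Hypothetical, abbreviated status illustrating the idea of recording
// which spec generation an in-flight asynchronous operation is for.
type InstanceStatus struct {
	AsyncOpInProgress    bool
	OperationGeneration  int64 // generation the operation was started against
	ReconciledGeneration int64 // last fully reconciled generation
}

type Instance struct {
	Generation int64 // lives in ObjectMeta on a real resource
	Status     InstanceStatus
}

// startAsyncProvision snapshots the generation when the operation begins,
// so later spec changes are not credited to this operation.
func startAsyncProvision(inst *Instance) {
	inst.Status.AsyncOpInProgress = true
	inst.Status.OperationGeneration = inst.Generation
}

// finishAsyncProvision marks the snapshotted generation reconciled, not
// whatever the latest generation happens to be at completion time.
func finishAsyncProvision(inst *Instance) {
	inst.Status.AsyncOpInProgress = false
	inst.Status.ReconciledGeneration = inst.Status.OperationGeneration
}

func main() {
	inst := &Instance{Generation: 1}
	startAsyncProvision(inst)
	inst.Generation = 2 // spec changed while the operation was in flight
	finishAsyncProvision(inst)
	fmt.Println(inst.Status.ReconciledGeneration) // 1: generation 2 still needs work
}
```

Because the completed operation records generation 1, the controller still sees `Generation != ReconciledGeneration` afterward and correctly reconciles the newer spec.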
There is a field in `ObjectMeta` called `Generation` that represents a specific version of an object's spec. This field is used to solve the 'have I reconciled this version of this object's spec yet?' problem as follows:

- The API server bumps the `Generation` field when the spec changes
- Status contains a field `ObservedGeneration` that contains the last successfully reconciled value of `Generation`
- Controllers check whether `Generation` and `Status.ObservedGeneration` match; if they match there is no work to do

This seems like a much better solution than using a checksum, and has more precedent in the existing API. I propose that we adopt this for the `Instance` and `Binding` types. I believe `Broker` should use the same pattern, but there are complications, as I have written about here.