Externalize provider specific specs and status in separated CRDs #833
Comments
@pablochacin would you be willing to defer discussion on this while we plan out the organization we discussed in today's community meeting around roadmap items, and using KEPs for things like this instead of individual issues?
@ncdc Sure. My intention with this issue was to put together ideas that I had expressed as comments in other places (issues, documents) and facilitate the discussion.
I love a lot of things about this approach and think it could be very successful. Some thoughts:
- Considering a provider "Foo", I would make a FooMachine CRD with FooMachineSpec and FooMachineStatus types, just like any other normal CRD. Is that what you are proposing? Then from a Machine I would reference a FooMachine resource via an ObjectReference.
- Enabling the cluster-api MachineController to watch these provider-specific resources.
- It's not clear to me how best to handle this from a MachineSet. How would we templatize the FooMachine? We could have a MachineSet reference its own copy of a FooMachine, and have it make a copy of that for each Machine it creates. Other ideas?
- Lastly, I'll just correct the perception of how metakube uses its CRDs.
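The "FooMachine" idea above can be sketched as plain Go types. Everything here is illustrative: "Foo" is the comment's own placeholder, and the `infrastructure.example.com` API group and field names are assumptions, not actual cluster-api definitions.

```go
// Sketch of the hypothetical FooMachine CRD discussed above, with the
// Machine carrying only a reference to it instead of an inline blob.
package main

import "fmt"

// ObjectReference mirrors the shape of corev1.ObjectReference,
// reduced to the fields relevant here.
type ObjectReference struct {
	APIVersion string
	Kind       string
	Name       string
}

// FooMachineSpec holds provider-specific machine configuration
// (fields are made up for illustration).
type FooMachineSpec struct {
	InstanceType string
	Region       string
}

// FooMachineStatus holds provider-specific machine status.
type FooMachineStatus struct {
	Ready      bool
	InstanceID string
}

// FooMachine is an ordinary CRD owned by provider "Foo".
type FooMachine struct {
	Name   string
	Spec   FooMachineSpec
	Status FooMachineStatus
}

// RefTo builds the ObjectReference a Machine would carry in place of
// the inline providerSpec payload.
func RefTo(fm FooMachine) ObjectReference {
	return ObjectReference{
		APIVersion: "infrastructure.example.com/v1alpha1", // hypothetical group
		Kind:       "FooMachine",
		Name:       fm.Name,
	}
}

func main() {
	fm := FooMachine{Name: "worker-0", Spec: FooMachineSpec{InstanceType: "m5.large", Region: "us-east-1"}}
	ref := RefTo(fm)
	fmt.Printf("Machine references %s/%s %q\n", ref.APIVersion, ref.Kind, ref.Name)
}
```

The open question from the comment remains visible in this shape: a MachineSet would need some way to stamp out one FooMachine per Machine, e.g. by copying a template FooMachine it references.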
@mhrivnak thanks for the clarification regarding how metakube uses the CRD.
@mhrivnak Your suggestion about using a generic controller watching the provider CRD seems interesting. I don't know if you are participating in the discussion about extension mechanisms for the cluster-api. That would be a good place to present and discuss the idea. Interested?
/area api
@vincepri @timothysc I would like to work on this issue (seems I cannot assign it to myself, can I?). However, I have two questions:
@pablochacin would you be willing to write up a proposal for the changes to the Cluster type?
@ncdc Yes, I would be interested. Now, I expect the cluster data model to change significantly, according to what we discussed in the data model workstream. For instance, the references to infrastructure and control plane objects. So, is it timely to do this change? If so, I'm in. Regarding the machine part, I'm not sure how to approach this change in coordination with the proposal we have on the table.
@pablochacin I am in favor of including this for v1alpha2 assuming:
If we want to do this in "small" steps, I'd suggest the only change we make for starters is the combination of removing the inline providerSpec and providerStatus fields (replacing them with object references to "cluster infrastructure" provider-specific CRDs, whatever they may look like per provider) and switching from a cluster Actuator to a cluster infrastructure controller. I think this would get rough alignment/coordination with #997. A possible next step after this, maybe for v1alpha3, could be to further break up the data model into infrastructure vs control plane vs other things.
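The "small step" described above — replacing the inline providerSpec with an object reference — can be contrasted in a short sketch. The type and field names below are illustrative stand-ins, not the exact cluster-api v1alpha1/v1alpha2 definitions:

```go
// Before/after sketch of the Machine spec change: an opaque inline
// provider payload versus a reference to a provider-owned CRD.
package main

import "fmt"

// RawBlob stands in for runtime.RawExtension: an opaque,
// provider-specific payload embedded directly in the Machine.
type RawBlob []byte

// MachineSpecBefore shows the current shape with the inline blob.
type MachineSpecBefore struct {
	ProviderSpec RawBlob
}

// ObjectReference is a minimal stand-in for corev1.ObjectReference.
type ObjectReference struct {
	APIVersion string
	Kind       string
	Name       string
}

// MachineSpecAfter shows the proposed shape: the provider-specific
// spec lives in its own CRD, and the Machine only references it.
type MachineSpecAfter struct {
	InfrastructureRef ObjectReference
}

func main() {
	before := MachineSpecBefore{ProviderSpec: RawBlob(`{"instanceType":"m5.large"}`)}
	after := MachineSpecAfter{InfrastructureRef: ObjectReference{
		APIVersion: "infrastructure.example.com/v1alpha1", // hypothetical group
		Kind:       "FooMachine",
		Name:       "worker-0",
	}}
	fmt.Printf("before: %d-byte opaque blob; after: reference to %s %q\n",
		len(before.ProviderSpec), after.InfrastructureRef.Kind, after.InfrastructureRef.Name)
}
```

With the reference shape, the provider-specific object gets its own schema, validation, and controller, instead of being an unvalidated blob inside the Machine.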
@ncdc sounds like a plan.
I'm going to mark this p0 and move it to the v1alpha2 milestone as this issue covers both Machine and Cluster providerSpec/Status and the current plan is at least to tackle the fields in Machine for v1alpha2. And if we can get the proposal for the Cluster changes approved & have someone sign up to do it, 👍! If we need to split this up so we have 1 issue for Cluster, and a separate one for Machine, please let me know @timothysc. /milestone v1alpha2
/assign
@pablochacin @ncdc - could folks please update this issue?
Pablo to write a proposal. |
/reopen The proposal has merged but we haven't modified the code yet. That's happening in #1177
@ncdc: Reopened this issue.
/kind feature
Describe the solution you'd like
Presently, the Cluster and Machine CRDs include an opaque representation of the provider-specific description of these resources and their status. An alternative approach would be to use CRDs for the provider-specific specs and status, and keep an ObjectReference to them in the Cluster and Machine objects.
With respect to the provider-specific status, the ClusterAPI controllers should watch the provider CRD and update the ClusterAPI CRD status if a change is detected.
Pros:
- clusterctl could use a plugin approach for handling provider-specific specs and status

Cons:
Anything else you would like to add: