InstallConfig Cannot Be Embedded in Kube Types Due to missing DeepCopyInto Methods #256
Additionally, the use of net.IPNet looks to be a problem in Kube for the same reason: missing DeepCopyInto methods. Will need to change this in InstallConfig, or fork implementations somehow.
Rewrote the original issue here with more detail and proposals.
This seems like low-hanging fruit. Any objections to me renaming while we work out the rest? From your hosting choices, I'd greedily prefer (1) (we keep it) or (4) (it has its own repo), at least for the short term. The others feel like they'd make it harder to evolve while we work out the kinks. Once the structure stabilizes, I have less of a preference.
(1) does sound OK for now, provided we get it fully functional as a Kube type. Then again, I'm not sure how much we can move it around if people start depending on its API group. I would suggest splitting InstallConfig so that ClusterConfig is the type we need officially Kube-ready. That Admin section doesn't look well suited for a Kube type; we want Secrets for all of that.
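To make the split concrete, here is a minimal sketch of the shape being proposed; all field names besides InstallConfig, ClusterConfig, and Admin are hypothetical, not taken from the actual installer types. The install-only Admin section stays out of the Kube-ready type, since credentials belong in Secrets rather than in a CRD field.

```go
package main

import "fmt"

// ClusterConfig is the hypothetical Kube-ready portion: settings that
// describe the cluster and remain meaningful after install.
type ClusterConfig struct {
	BaseDomain  string
	MachineCIDR string
}

// Admin holds install-time-only data; in a running cluster this
// information would live in Secrets, not in an API object.
type Admin struct {
	Email    string
	Password string
}

// InstallConfig embeds ClusterConfig and adds the install-only pieces.
type InstallConfig struct {
	ClusterConfig
	Admin Admin
}

func main() {
	ic := InstallConfig{
		ClusterConfig: ClusterConfig{BaseDomain: "example.com"},
		Admin:         Admin{Email: "admin@example.com"},
	}
	// Embedding promotes ClusterConfig's fields onto InstallConfig.
	fmt.Println(ic.BaseDomain)
}
```

With this layout, Hive could vendor and embed only ClusterConfig, while the installer keeps the full InstallConfig.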
The InstallConfig exists in the cluster as a source of install-time decisions. Renaming it to ClusterConfig means users will be inclined to edit it expecting to make changes to the running cluster, and I'm not sure that's going to be true.
It's definitely got to be true for Hive, so would nesting ClusterConfig within InstallConfig help clarify that? Then we just vendor in and use ClusterConfig.
Are there docs on what methods we need to implement to be a Kubernetes object? I was hunting around but couldn't find anything definitive.
I think it's basically this: https://github.com/kubernetes-sigs/cluster-api/blob/master/pkg/apis/cluster/v1alpha1/zz_generated.deepcopy.go And AFAIK you don't want to maintain it by hand; rather, use the Kubernetes tools to auto-generate it.
Devan needs DeepCopyInto on IPNet as part of embedding our types into Hive's ClusterDeployment CRD [1]. You could generate these with deepcopy-gen [2], but setting up automatic code generation is a bit heavy for just these methods [3], so we're hand-coding them for now. [1]: openshift#256 (comment) [2]: https://godoc.org/k8s.io/gengo/examples/deepcopy-gen [3]: https://godoc.org/k8s.io/gengo/examples/deepcopy-gen [3]: openshift#276 (comment)
Per the call today, I think we're going to fork and develop Hive-specific types that will generate an InstallConfig for when we call the installer. Closing for now.
This reverts commit 06739c6, openshift#276. Devan's forking the config in Hive [1], so they no longer need us to support embedding in Kubernetes objects. [1]: openshift#256 (comment)
I've been working on re-using InstallConfig within Hive's ClusterDeployment CRD and have run into some problems that could use discussion:
Should the canonical source of the cluster config type live in the Installer? Would the config ever contain options that the installer ignored or did not act on? (and perhaps Hive or other actors would?) (I think the answer is probably "no" here)
How can we share defaulting and validation code? Ideally we want to inform an API caller that their config is not valid without relying on the Installer to fail in a pod we're running. Quicker feedback will be important, and ideally we should all be sharing the same code to provide it.
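One way to share that logic is a validation package both sides vendor, so Hive can reject a bad config at API-call time and the installer can run the identical checks before acting. A hedged sketch under that assumption; the function name, parameters, and checks are hypothetical, not the installer's actual validation API:

```go
package main

import (
	"fmt"
	"net"
)

// ValidateInstallConfig is a hypothetical shared entry point. Both the
// installer CLI and an API caller like Hive could run the same checks,
// giving fast feedback instead of a failed installer pod.
func ValidateInstallConfig(clusterName, machineCIDR string) []error {
	var errs []error
	if clusterName == "" {
		errs = append(errs, fmt.Errorf("clusterName must not be empty"))
	}
	if _, _, err := net.ParseCIDR(machineCIDR); err != nil {
		errs = append(errs, fmt.Errorf("machineCIDR: %v", err))
	}
	return errs
}

func main() {
	// An invalid config: empty name and a malformed CIDR.
	for _, err := range ValidateInstallConfig("", "10.0.0/16") {
		fmt.Println(err)
	}
}
```

Returning a slice of errors rather than failing on the first problem lets an API server surface every issue in one round trip, which matters most for the "quicker feedback" goal above.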
Is InstallConfig appropriately named? Per the last arch call we agreed it's not just an install-time thing. Would ClusterConfig be more accurate?
Some options to clean this up:
(1) Keep InstallConfig in the Installer. Hook up Kubernetes code gen and commit to all the guarantees required for the type going forward: treat it as an externally facing API object whose compatibility guarantees must be greater than or equal to those of any embedding format.
(2) Hand it over to Hive; we maintain the Kube generation and API contract, and you vendor it into your repo. (I would propose breaking out something like ClusterConfig in the Hive repo, with InstallConfig remaining in your repo and having ClusterConfig and Admin fields; Admin doesn't map nicely to the Kube Secrets we'd use for this info.)
(3) Place the ClusterConfig definition directly into the core OpenShift API server. I don't know if this would fly, but I kind of like the idea: it makes it very official, we get an API server for free and better options for versioning, validation, and defaulting than CRDs, and it would signify how hard it is, or should be, to change.
(4) Spin into a separate project and repo we all vendor.
Open to other suggestions. Please let me know what you think.