Incorporate kube-rbac-proxy image registry location into manifest generation #675
Hi @MnrGreg, so how would you like to solve this? You seem ambivalent about including the option, but at the same time you opened this issue? |
Related to kubernetes-sigs/cluster-api#1767 |
Is this an issue in v1alpha2 or v1alpha3? If v1alpha3, can @fabriziopandini please chime in and kindly remind me if clusterctl v2 has the ability to apply JSON patches? |
@akutz the JSON patches are applied by kustomize during the manifest generation. |
Does that mean clusterctl itself does not apply them? The point is that this person wants to use a custom version of the kube-rbac-proxy image. It seems to me that if clusterctl could apply user-supplied patches, this would be handled there. The CAPV manifest generator is very nice for users, but in v1alpha3 the manifest generator is at odds with clusterctl. |
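For context, this is the kind of kustomize JSON 6902 patch that could retarget the kube-rbac-proxy image during manifest generation. It is a sketch only: the deployment name, namespace, patch file name, and container index are assumptions and would need to match the CABPK components actually being patched.

```yaml
# kustomization.yaml fragment (deployment name and namespace are illustrative)
patchesJson6902:
- target:
    group: apps
    version: v1
    kind: Deployment
    name: cabpk-controller-manager   # assumed CABPK manager deployment
    namespace: cabpk-system          # assumed namespace
  path: kube-rbac-proxy-image.yaml
```

```yaml
# kube-rbac-proxy-image.yaml: point the sidecar at a private-registry copy
# (the container index depends on the generated Deployment)
- op: replace
  path: /spec/template/spec/containers/0/image
  value: registry.example.internal/kubebuilder/kube-rbac-proxy:v0.4.1
```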
@akutz I got your point and I kind of like the idea, so, if you/the user can open the issue in the CAPI repo we can rally on defining details (e.g. how to pass the patches to clusterctl/how to define to which providers the patch should apply). But just to set expectations, from my point of view this is on top of what was discussed with the CAEP/with the POC, so, unless someone steps in to help, I will jump on this at best effort/after everything else is completed.
Nothing prevents users from using the manifest generator that exists in CAPV and passing the generated manifest to clusterctl as a local override. Quoting from kubernetes-sigs/cluster-api#1994 |
@akutz @fabriziopandini While the Manifest Generator worked well initially, in our use-case there are quite a few additional, common settings we need to apply to the Workload clusters. We found ourselves applying Kustomize patches on top of the CAPV manifest-generated templates in order to address these. Some examples:
Would the provider overrides in clusterctl address these? I was wondering if anyone had looked at the possibility of using just Kustomize for all the CAPV Workload Cluster templating. One could use a configMapGenerator together with Kustomize vars to substitute the environment-specific values. A brief example:

clustervars.env:

```
VSPHERE_SERVER=vsphereserver.domain.local
VSPHERE_DATACENTER=west-1-prod
VSPHERE_DATASTORE=west-1-prod-lun01
VSPHERE_NETWORK=rck3236trunk
```

kustomization.yaml:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- cluster.yaml
configMapGenerator:
- name: vsphere-configs
  envs:
  - clustervars.env
patchesStrategicMerge:
- |-
  apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
  kind: VSphereCluster
  metadata:
    name: generic-cluster-1
    namespace: default
  spec:
    cloudProviderConfiguration:
      network:
        name: $(VSPHERE_NETWORK)
      workspace:
        datacenter: $(VSPHERE_DATACENTER)
        datastore: $(VSPHERE_DATASTORE)
        folder: vm
        resourcePool: 'Infrastructure/Resources'
        server: $(VSPHERE_SERVER)
    server: $(VSPHERE_SERVER)
vars:
- name: VSPHERE_SERVER
  objref:
    kind: ConfigMap
    name: vsphere-configs
    apiVersion: v1
  fieldref:
    fieldpath: data.VSPHERE_SERVER
- name: VSPHERE_NETWORK
  objref:
    kind: ConfigMap
    name: vsphere-configs
    apiVersion: v1
  fieldref:
    fieldpath: data.VSPHERE_NETWORK
- name: VSPHERE_DATACENTER
  objref:
    kind: ConfigMap
    name: vsphere-configs
    apiVersion: v1
  fieldref:
    fieldpath: data.VSPHERE_DATACENTER
- name: VSPHERE_DATASTORE
  objref:
    kind: ConfigMap
    name: vsphere-configs
    apiVersion: v1
  fieldref:
    fieldpath: data.VSPHERE_DATASTORE
configurations:
- varref.yaml
```

Benefits:
Also, other customizations like #699 could easily be addressed through Overlay Patches. Any thoughts? Has this been tried already, or is it worth attempting? |
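For completeness, the varref.yaml referenced in the kustomization above is a kustomize transformer configuration that tells kustomize which fields may contain $(VAR) references. A minimal sketch, assuming the v1alpha2 VSphereCluster field paths used in the patch:

```yaml
# varref.yaml: allow var substitution in these VSphereCluster fields
varReference:
- kind: VSphereCluster
  path: spec/server
- kind: VSphereCluster
  path: spec/cloudProviderConfiguration/network/name
- kind: VSphereCluster
  path: spec/cloudProviderConfiguration/workspace/server
- kind: VSphereCluster
  path: spec/cloudProviderConfiguration/workspace/datacenter
- kind: VSphereCluster
  path: spec/cloudProviderConfiguration/workspace/datastore
```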
This CAPI issue extends the support for custom user templates: kubernetes-sigs/cluster-api#2133; this can work with any method used for template generation (kustomize, the manifest generator, or anything else). |
/assign @randomvariable |
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
closing this as it's supported now |
/kind feature
Incorporate kube-rbac-proxy image registry location into manifest generation and templates.
The container image registry locations for CAPI, CABPK (manager) and CAPV are configurable through envvars.txt as part of manifest generation.
The CABPK kube-rbac-proxy image is, however, not configurable. One can find/replace in the templates manually, but this detracts from usability and creates a little confusion for customers with air-gapped environments that use private registries.
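Until the registry location is parameterized, one interim workaround is a kustomize strategic-merge patch applied over the generated manifests that points the kube-rbac-proxy container at a private registry. A minimal sketch; the deployment name, namespace, container name, and tag below are assumptions and must match the CABPK release in use:

```yaml
# private-registry-patch.yaml (all names and the tag are illustrative)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cabpk-controller-manager   # assumed CABPK manager deployment
  namespace: cabpk-system          # assumed namespace
spec:
  template:
    spec:
      containers:
      - name: kube-rbac-proxy      # merged by container name
        image: registry.example.internal/kubebuilder/kube-rbac-proxy:v0.4.1
```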
Environment:
Kubernetes version (use kubectl version): 1.16.2
OS (e.g. from /etc/os-release): CentOS 7