
Incorporate kube-rbac-proxy image registry location into manifest generation #675

Closed · MnrGreg opened this issue Dec 2, 2019 · 13 comments

Labels: kind/feature (Categorizes issue or PR as related to a new feature.) · lifecycle/rotten (Denotes an issue or PR that has aged beyond stale and will be auto-closed.)

@MnrGreg
Contributor

MnrGreg commented Dec 2, 2019

/kind feature

Incorporate kube-rbac-proxy image registry location into manifest generation and templates.

The container image registry locations for the CAPI, CABPK (manager), and CAPV managers are configurable through envvars.txt as part of manifest generation:

export CABPK_MANAGER_IMAGE=""
export CAPI_MANAGER_IMAGE=""
export CAPV_MANAGER_IMAGE=""

The CABPK kube-rbac-proxy image is, however, not configurable. One can find/replace in the templates manually, but this detracts from usability and creates confusion for customers with air-gapped environments that use private registries.
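
For air-gapped environments, a kustomize image override is one possible interim workaround. The sketch below is only illustrative; the resource file name and the default kube-rbac-proxy image reference are assumptions that should be verified against the generated manifest.

kustomization.yaml:

# Sketch only: rewrite the kube-rbac-proxy image to point at a private registry.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- bootstrap-components.yaml                    # the generated CABPK manifest (assumed file name)
images:
- name: gcr.io/kubebuilder/kube-rbac-proxy     # default image name as scaffolded by kubebuilder (verify)
  newName: registry.example.internal/kubebuilder/kube-rbac-proxy
  newTag: v0.4.1                               # keep whatever tag the manifest currently pins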

Environment:

  • Cluster-api-provider-vsphere version: 0.5.3
  • Kubernetes version (use kubectl version): 1.16.2
  • OS (e.g. from /etc/os-release): CentOS 7
@k8s-ci-robot added the kind/feature label on Dec 2, 2019
@akutz
Contributor

akutz commented Dec 16, 2019

Hi @MnrGreg,

So how would you like to solve this? You seem ambivalent about including the option, but at the same time you opened this issue?

@akutz
Contributor

akutz commented Jan 10, 2020

Related to kubernetes-sigs/cluster-api#1767

@akutz
Contributor

akutz commented Jan 10, 2020

ping @MnrGreg
cc @yastij

@akutz
Contributor

akutz commented Jan 10, 2020

Is this an issue in v1alpha2 or v1alpha3? If v1alpha3, can @fabriziopandini please chime in and kindly remind me if clusterctl v2 has the ability to apply JSON patches?

@fabriziopandini
Member

@akutz the JSON patches are applied by kustomize during the manifest generation; clusterctl v2 uses the generated manifest.

@akutz added this to the v1alpha3 milestone on Jan 10, 2020
@akutz
Contributor

akutz commented Jan 10, 2020

@akutz the JSON patches are applied by kustomize during the manifest generation; clusterctl v2 uses the generated manifest.

Does that mean clusterctl itself doesn't support applying any type of patch to the YAML to which it points, the way kubectl has built-in support for kustomize in order to apply patches?

The point is that this person wants to use a custom image location for kube-rbac-proxy, but since that image is used by more than CAPV, even if we supported generating YAML with a custom kube-rbac-proxy image, the same would need to be true for all CAPI components that clusterctl installs as well.

It seems to me that if clusterctl v2 is being offered as a solution for deploying CAPI, it can't assume static component sources, or even component sources that have been pre-created with variables to be replaced, and should offer the ability to apply some measure of customization, such as with kustomize and JSON patches.

The CAPV manifest generator is very nice for users, but in v1alpha3 it is at odds with clusterctl and CAPI, so we're discussing whether or not to drop support for it. In doing so, though, it seems like our users will take a step backwards in functionality if they can't apply customizations to the YAML they're using to deploy clusters.
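
For illustration only, a JSON 6902 patch of that sort might look like the sketch below; the deployment name, namespace, file names, and container index are assumptions rather than values taken from the actual provider manifests.

kustomization.yaml:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- bootstrap-components.yaml          # provider components YAML (assumed file name)
patchesJson6902:
- target:
    group: apps
    version: v1
    kind: Deployment
    name: cabpk-controller-manager   # assumed deployment name
    namespace: cabpk-system          # assumed namespace
  path: kube-rbac-proxy-image-patch.yaml

kube-rbac-proxy-image-patch.yaml:

# JSON 6902 patch in YAML form; the container index is an assumption.
- op: replace
  path: /spec/template/spec/containers/0/image
  value: registry.example.internal/kubebuilder/kube-rbac-proxy:v0.4.1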

@fabriziopandini
Member

should offer the ability to apply some measure of customization, such as with kustomize and JSON patches

@akutz I get your point and I kind of like the idea, so if you/the user can open an issue in the CAPI repo, we can rally on defining the details (e.g. how to pass the patches to clusterctl, and how to define which providers the patch should apply to).

But just to set expectations: from my point of view this is on top of what was discussed in the CAEP/the POC, so, unless someone steps in to help, I will jump on this on a best-effort basis after everything else is completed.

The CAPV manifest generator is very nice for users ...

Nothing prevents users from using the manifest generator that exists in CAPV and passing the generated manifest to clusterctl as a local override.

Quoting from kubernetes-sigs/cluster-api#1994:
"If, for any reasons, the user wants to replace the assets available on a provider repository with a locally available asset, the user is required to save the file under $HOME/.cluster-api/overrides/<provider-name>/<version>/<file-name.yaml>."

@MnrGreg
Contributor Author

MnrGreg commented Feb 2, 2020

@akutz @fabriziopandini While the Manifest Generator worked well initially, in our use-case there are quite a few additional, common settings we need to apply to the Workload clusters. We found ourselves applying Kustomize patches on top of the CAPV manifest-generated templates in order to address these.

Some examples (an illustrative patch sketch follows the list):

  • adding company CA certs with Files
  • adding registry mirrors to CRI Toml with preKubeadmCommands
  • adding node-labels to nodeRegistration/kubeletExtraArgs
  • adding an OIDC provider to clusterConfiguration/apiServer/extraArgs
  • disabling SSH daemon with preKubeadmCommands
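
As a rough illustration (resource names and values below are placeholders, not our actual configuration), one such strategic-merge patch looks like:

# Sketch of a patch layered on the generated template; names are placeholders.
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha2
kind: KubeadmConfigTemplate
metadata:
  name: generic-cluster-1-md-0          # placeholder name
  namespace: default
spec:
  template:
    spec:
      preKubeadmCommands:
      - systemctl disable --now sshd    # example: disable the SSH daemon
      joinConfiguration:
        nodeRegistration:
          kubeletExtraArgs:
            node-labels: "example.com/pool=general"   # example node label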

Would the provider overrides in $HOME/.cluster-api/overrides/<provider-name>/<version>/<file-name.yaml> take Kustomization files as input?
And would they apply to Workload Cluster deployments with clusterctl config cluster ...?

I was wondering if anyone had looked at the possibility of using just Kustomize for all the CAPV Workload Cluster templating. One could use varReference and configMapGenerator to populate the variables from a single, user-maintained key-value text file, similar to the current envsubst and envvars.txt process. I expect it would require quite a bit of additional YAML, but having the end users maintain just the single file would abstract away the complexity for them.

A brief example:

clustervars.env:

VSPHERE_SERVER=vsphereserver.domain.local
VSPHERE_DATACENTER=west-1-prod
VSPHERE_DATASTORE=west-1-prod-lun01
VSPHERE_NETWORK=rck3236trunk

kustomization.yaml:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- cluster.yaml
configMapGenerator:
- name: vsphere-configs
  envs:
  - clustervars.env
patchesStrategicMerge:
- |-
  apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
  kind: VSphereCluster
  metadata:
    name: generic-cluster-1
    namespace: default
  spec:
    cloudProviderConfiguration:
      network:
        name: $(VSPHERE_NETWORK)
      workspace:
        datacenter: $(VSPHERE_DATACENTER)
        datastore: $(VSPHERE_DATASTORE)
        folder: vm
        resourcePool: 'Infrastructure/Resources'
        server: $(VSPHERE_SERVER)
    server: $(VSPHERE_SERVER)
vars:
  - name: VSPHERE_SERVER
    objref:
      kind: ConfigMap
      name: vsphere-configs
      apiVersion: v1
    fieldref:
      fieldpath: data.VSPHERE_SERVER
  - name: VSPHERE_NETWORK
    objref:
      kind: ConfigMap
      name: vsphere-configs
      apiVersion: v1
    fieldref:
      fieldpath: data.VSPHERE_NETWORK
  - name: VSPHERE_DATACENTER
    objref:
      kind: ConfigMap
      name: vsphere-configs
      apiVersion: v1
    fieldref:
      fieldpath: data.VSPHERE_DATACENTER
  - name: VSPHERE_DATASTORE
    objref:
      kind: ConfigMap
      name: vsphere-configs
      apiVersion: v1
    fieldref:
      fieldpath: data.VSPHERE_DATASTORE
configurations:
  - varref.yaml
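
The referenced varref.yaml would extend kustomize's varReference configuration so the $(VAR) strings inside the VSphereCluster spec actually get substituted; a rough sketch (field paths illustrative):

varref.yaml:

varReference:
- kind: VSphereCluster
  path: spec/server
- kind: VSphereCluster
  path: spec/cloudProviderConfiguration/network/name
- kind: VSphereCluster
  path: spec/cloudProviderConfiguration/workspace/datacenter
- kind: VSphereCluster
  path: spec/cloudProviderConfiguration/workspace/datastore
- kind: VSphereCluster
  path: spec/cloudProviderConfiguration/workspace/server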

Benefits:

  1. End users still need only maintain a single key-value text file of necessary CAPV variables (similar to the existing envvars.txt)
  2. End users who are more familiar with Kustomize can add their own specific Kustomize Overlays on top quite simply and easily (the main benefit/driver here)
  3. It saves a configuration step by bypassing the Manifest Generator
  4. Workload Cluster deployments need only the kubectl binary

Also, other customizations like #699 could easily be addressed through Overlay Patches

Any thoughts? Has this been tried already, or is it worth attempting?

@fabriziopandini
Member

This CAPI issue extends support for custom user templates: kubernetes-sigs/cluster-api#2133. This can work with any method used for template generation (kustomize, the manifest generator, or anything else).

@yastij
Member

yastij commented Feb 24, 2020

/assign @randomvariable

@yastij modified the milestone from v1alpha3 to Next on Feb 24, 2020
jayunit100 pushed a commit to jayunit100/cluster-api-provider-vsphere that referenced this issue Feb 26, 2020
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on May 24, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Jun 23, 2020
@yastij
Member

yastij commented Jun 24, 2020

closing this as it's supported now

@yastij closed this as completed on Jun 24, 2020