
How do you deploy kcc to a specific node-pool? #230

Open
coilysiren opened this issue Jul 2, 2020 · 9 comments
Labels
enhancement (New feature or request)

Comments

@coilysiren


(apologies in advance for this being more so a k8s question than a kcc question!)


Backstory (not required reading)

I'm trying to activate kcc and workload identity on an existing cluster, and I predict that I will need to do so again many times in the future. So I'm trying to use kcc itself for this as much as possible, since I generally find tools like it to be much more stable than a random assortment of bash scripts. There's a very specific snag, though: the existing cluster already has a running esp (https://github.com/cloudendpoints/esp) instance, and that instance crashes when you try to activate workload identity on its node. This means I cannot "simply" activate workload identity on the cluster, which makes my cluster setup process more complicated.

What I would like to do

There are many ways I could address my problem, but what I would like to do is:

  1. deploy kcc + workload identity to a secondary node-pool
  2. use kcc to help activate workload identity on a primary node-pool

This would require some way to tell k8s to use the secondary node-pool for the kcc deploy. Is there some obvious way to do that?

@coilysiren added the question (Further information is requested) label on Jul 2, 2020
@xiaobaitusi
Contributor

Hi @lynncyrin, to understand your use case better, can you explain a little about how you plan to use KCC in multiple clusters? How are you going to distribute your GCP resources among those clusters? Just so you know, you can use one KCC instance to manage GCP resources in multiple projects: https://cloud.google.com/config-connector/docs/how-to/managing-multiple-projects

@coilysiren
Author

I plan on using kcc on a single cluster with multiple node-pools. The node-pools themselves don't have the resources distributed in any particular fashion, although they will likely be configured differently (e.g. one with workload identity, the other without). I don't need a multi-project setup.

So to summarize, it's:

  • 1 cluster
  • 1 project
  • multiple node-pools

@maqiuyujoyce
Collaborator

Thank you for the clarification, @lynncyrin. In your original post, you mentioned: "I'm trying to activate kcc and workload identity on an existing cluster, and I predict that I will need to do so again many times in the future." Could you explain more about this? I assume you won't need to do it more than once if you plan on having only a single cluster.

If what you meant was that you plan to install KCC in each existing cluster you have, have you thought about using one KCC instance to manage all the other clusters?

@kibbles-n-bytes
Contributor

Hey @lynncyrin, wanted to make sure we followed up so we can at least unblock you. We don't officially support running the KCC workload on only a particular node pool, but it's technically possible by editing our cnrm-controller-manager StatefulSet's pod template to include a node selector. For more information, see the core Kubernetes documentation: Assigning Pods to Nodes. We're in the process of moving our installation to use a custom operator, at which point any edits to the StatefulSets would be overwritten by our controller, so this would only work as a temporary workaround for our existing versions.
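
For example, a minimal sketch of that edit, assuming a secondary node pool named `kcc-pool` (a placeholder) and relying on the `cloud.google.com/gke-nodepool` label that GKE puts on every node:

```yaml
# Sketch only: merge this into the cnrm-controller-manager StatefulSet
# (namespace cnrm-system), e.g. via
#   kubectl edit statefulset cnrm-controller-manager -n cnrm-system
# "kcc-pool" is a placeholder for your secondary node pool's name; GKE
# labels each node with cloud.google.com/gke-nodepool: <pool-name>.
spec:
  template:
    spec:
      nodeSelector:
        cloud.google.com/gke-nodepool: kcc-pool
```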

@toumorokoshi added the enhancement (New feature or request) label and removed the question (Further information is requested) label on Nov 25, 2020
@toumorokoshi
Contributor

Labeling as an enhancement, since we don't support this functionality today (without hand-editing our manifests, which we do not recommend).

@mirkoszy

I am facing the same issue, but my scenario is different and more concrete.

According to the documentation (https://cloud.google.com/kubernetes-engine/docs/how-to/isolate-workloads-dedicated-nodes), I am creating a dedicated node pool for GKE internal workloads. And I realized that I cannot move Config Connector (self-installed) to a specific node pool. I added tolerations to the Config Connector Operator StatefulSet, but they were not propagated into the child resources in the cnrm-system namespace.
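
For reference, the toleration in question looks roughly like this (illustrative only; `dedicated=kcc:NoSchedule` is a stand-in for whatever taint the dedicated pool actually carries):

```yaml
# Illustrative toleration for a dedicated pool tainted dedicated=kcc:NoSchedule
# (substitute your pool's real taint key, value, and effect).
tolerations:
- key: dedicated
  operator: Equal
  value: kcc
  effect: NoSchedule
```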

I am not interested in editing manifests manually.

@bison

bison commented Jan 31, 2024

Wanted to add that when creating a GKE cluster with config-connector installed as an add-on, it is the only component that doesn't set a toleration for the components.gke.io/gke-managed-components taint. Even just doing that for "official" add-on installs would be a step towards allowing it to be grouped with the other control-plane components.
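
Concretely, that would mean the add-on's pods carrying a toleration along these lines (a sketch; `effect` is omitted so it matches the taint regardless of its effect):

```yaml
# Sketch: tolerate the taint GKE puts on managed-components nodes.
# Omitting "effect" tolerates the taint whatever its effect is.
tolerations:
- key: components.gke.io/gke-managed-components
  operator: Exists
```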

@mmonaco

mmonaco commented Sep 10, 2024

I'm interested in this too. One way might be to allow the cnrm-system components to copy tolerations from the operator. Another would be to add tolerations to what's supported by the ControllerResource CRD.
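
For the second option, a purely hypothetical sketch of what that might look like (the `spec.tolerations` field does not exist on the CRD today, and the apiVersion below is a guess at the current customization group; check the CRDs installed in your cluster):

```yaml
# Hypothetical: tolerations exposed through the ControllerResource CRD.
# spec.tolerations is NOT implemented today; this only illustrates the ask.
apiVersion: customize.core.cnrm.cloud.google.com/v1beta1  # guess; verify locally
kind: ControllerResource
metadata:
  name: cnrm-controller-manager
spec:
  tolerations:  # proposed field, not implemented
  - key: dedicated
    operator: Equal
    value: kcc
    effect: NoSchedule
```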

@maqiuyujoyce
Copy link
Collaborator

Thank you for the feedback, @mmonaco! @yuwenma FYI, this is a feature we've received a few requests for.
