How do you deploy kcc to a specific node-pool? #230
Comments
Hi @lynncyrin, to understand your use case better, can you explain a little more about the way that you plan to use KCC in multiple clusters? How are you going to distribute your GCP resources among those clusters? Just so you know, you can use one KCC instance to manage GCP resources in multiple projects: https://cloud.google.com/config-connector/docs/how-to/managing-multiple-projects
I plan on using kcc on a single cluster with multiple node-pools. The node-pools themselves don't have the resources distributed in any particular fashion, although they will likely be configured differently (ex. one with workload identity, and the other without). I don't have a multi-project setup, nor do I need one.
Thank you for the clarification, @lynncyrin. In your original post, you mentioned that […]. If what you meant was that you plan to install KCC in each existing cluster you have, have you thought about using one KCC instance to manage all the other clusters?
Hey @lynncyrin, wanted to make sure we followed up to at least be able to unblock you. We don't officially support running the KCC workload on only a particular node pool, but it's technically possible by editing our manifests.
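For anyone considering that unsupported route, here is a minimal sketch of the kind of hand edit involved. The pool name `kcc-pool` is an assumption, and this is not the actual layout of the KCC manifests, just an illustrative excerpt; the `cloud.google.com/gke-nodepool` label is set by GKE on every node.

```yaml
# Hypothetical excerpt of the cnrm-controller-manager StatefulSet in the
# cnrm-system namespace, with a nodeSelector added by hand so the KCC
# controller pods schedule only onto one node pool.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cnrm-controller-manager
  namespace: cnrm-system
spec:
  template:
    spec:
      # Pin the pods to a specific GKE node pool (pool name assumed here).
      nodeSelector:
        cloud.google.com/gke-nodepool: kcc-pool
```

As noted below, hand-editing the installed manifests is not recommended; the operator may overwrite such edits on reconciliation.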
Labeling as an enhancement, as we don't support such functionality today (without hand-editing our manifests, which we do not recommend).
I am facing the same issue, but my scenario is different and more concrete. Following the documentation — https://cloud.google.com/kubernetes-engine/docs/how-to/isolate-workloads-dedicated-nodes — I am creating a dedicated node pool for GKE internal workloads. I realized that I cannot move Config Connector (self-installed) to a specific node pool. I added tolerations to the Config Connector Operator StatefulSet, but they were not propagated into the child resources in the cnrm-system namespace. I am not interested in editing manifests manually.
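To make the scenario concrete, here is a sketch of the scheduling fields that would need to end up on the KCC pods for the dedicated-node-pool pattern above to work. The taint key/value (`dedicated=kcc`) is an assumption following the pattern in the linked GKE doc, not something KCC defines; the commenter's point is exactly that the operator does not propagate these fields from its own StatefulSet down to the child resources.

```yaml
# Hypothetical pod-spec fragment: a toleration matching an assumed
# dedicated=kcc:NoSchedule taint on the dedicated node pool, paired with
# a nodeSelector so the pods actually land there (a toleration alone
# only permits scheduling, it does not require it).
spec:
  template:
    spec:
      tolerations:
      - key: "dedicated"
        operator: "Equal"
        value: "kcc"
        effect: "NoSchedule"
      nodeSelector:
        dedicated: kcc
```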
Wanted to add to this that when creating a GKE cluster and installing config-connector as an add-on, it is the only component that doesn't set a toleration for the […].
I'm interested in this too. One way might be to allow the […].
(apologies in advance for this being more-so a k8s question than a kcc question!)
Backstory (not required reading)
I'm trying to activate kcc and workload identity on an existing cluster, and I predict that I will need to do so again many times in the future. So I'm trying to use kcc itself for this as much as possible, since I generally find tools like it to be much more stable than a random assortment of bash scripts. There's a very specific snag, though: the existing cluster already has a running ESP (https://github.com/cloudendpoints/esp) instance, and that instance crashes when you try to activate workload identity on its node. This means I cannot "simply" activate workload identity on the cluster, which makes my cluster setup process more complicated.
What I would like to do
There are many, many ways I could address my problem, but what I would like to do is:
This would require some way to tell k8s to use the secondary node-pool for the kcc deploy, is there some obvious way to do that?
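For an ordinary workload, the usual Kubernetes answer is a `nodeSelector` (or node affinity) on the pod spec; on GKE, every node carries the `cloud.google.com/gke-nodepool` label with its pool's name. A minimal sketch, assuming a secondary pool named `secondary-pool` (the Deployment name and image below are placeholders):

```yaml
# Hypothetical Deployment pinned to a specific GKE node pool via the
# node label GKE applies automatically.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-workload      # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-workload
  template:
    metadata:
      labels:
        app: example-workload
    spec:
      nodeSelector:
        cloud.google.com/gke-nodepool: secondary-pool
      containers:
      - name: app
        image: gcr.io/example/app:latest   # placeholder image
```

The catch for KCC specifically is that its pods are created from manifests managed by its operator, so this field has to be applied (and kept) on the operator-generated resources rather than on a Deployment you author yourself.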