Support custom HA proxy config in CAPD #7684
Comments
If I got it right, what we are discussing is to make the CAPD load balancer act as a load balancer for some other service. Have you considered addressing this with ingress/load balancer as documented in the kind book? Those approaches should work with CAPD too (I'm assuming the service we are talking about is a regular K8s application). As an alternative, what about having a runtime extension that hooks into the Cluster lifecycle and creates the additional load balancer?
Adding
/close
@fabriziopandini: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Hey, @fabriziopandini, my apologies for the delayed response. I wanted to clarify that the service we discussed is not a k8s application. In the case of RKE2 control plane nodes, they run a server to register new nodes, listening on port 9345. Meanwhile, the K8s API server runs on port 6443. This means that both ports 6443 and 9345 need to be accessible for other nodes to connect. https://docs.rke2.io/install/requirements?_highlight=9345#inbound-network-rules.
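To make the requirement concrete: below is a minimal sketch of the kind of extra haproxy stanza the CAPD load balancer would need so that port 9345 is forwarded to the control plane alongside 6443. It is wrapped in a Go constant only to keep the examples in this thread in one language; the frontend/backend names, the server name, and the address are placeholders, not values from this issue.

```go
package main

import "fmt"

// rke2RegistrationSection is a hypothetical illustration: a TCP
// frontend/backend pair for RKE2's node-registration port 9345, mirroring
// what the load balancer already does for the API server on 6443.
// Server names and addresses are placeholders.
const rke2RegistrationSection = `
frontend rke2-join
    bind *:9345
    mode tcp
    default_backend rke2-servers

backend rke2-servers
    mode tcp
    server control-plane-1 172.18.0.3:9345 check
`

func main() {
	fmt.Print(rke2RegistrationSection)
}
```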
/reopen
@alexander-demicev: Reopened this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
I always assumed that load balancers for workloads are the responsibility of cloud providers and not of CAPI (as they are implementing Services of type LoadBalancer). Of course, CAPD is a special case; CAPA isn't though.
@sbueringer I guess it depends on how you define the workload: is it a process that is running on a control plane machine, a containerized application, or both? Because in our case we are talking about a process that runs on a VM/server.
To add more context, in CAPA we will configure ingress rules in a way that only worker nodes will be able to access the control plane node on this specific port. This is more of a cluster infrastructure thing than a cloud provider thing.
Ah got it. Yeah "server to register new nodes" does not sound like the kind of workload I meant. For me it sounds fine to provide some sort of extension point to replace / append to the haproxy config. @killianmuldoon @chrischdi As you're more familiar with our haproxy setup, wdyt?
Yeah - currently we define it in a Go file - here https://github.com/kubernetes-sigs/cluster-api/blob/f78afa6eec25981b2979130c7f88bb8591429ef8/test/infrastructure/docker/internal/loadbalancer/config.go#L42-L41 Could definitely make this a template that could be fed into CAPD at start time, or from a ConfigMap that another process could write to.
I would prefer not having a separate ConfigMap, but I think either some controller arg or a field in the DockerCluster is reasonable. I guess a field in DockerCluster would be the "normal" way to support it.
something like this should work:

```go
// ConfigData is supplied to the loadbalancer config template.
type ConfigData struct {
	ControlPlanePort int
	BackendServers   map[string]string
	IPv6             bool
	AdditionalConfig []AdditionalConfig
}

type AdditionalConfig struct {
	Port    int
	Options []string
}
```

I can open a PR if there are no objections
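To sketch how the proposed `AdditionalConfig` entries could be consumed, here is a minimal, self-contained example using Go's text/template. The template fragment and the sample values are assumptions for illustration; the real CAPD/kind haproxy template is more involved.

```go
package main

import (
	"os"
	"text/template"
)

// These mirror the structs proposed in the comment above.
type AdditionalConfig struct {
	Port    int
	Options []string
}

type ConfigData struct {
	ControlPlanePort int
	BackendServers   map[string]string
	IPv6             bool
	AdditionalConfig []AdditionalConfig
}

// extraSections is a toy template fragment showing how extra ports could be
// expanded into frontend/backend pairs; the real haproxy template used by
// CAPD/kind is more involved.
const extraSections = `
{{- range $i, $cfg := .AdditionalConfig }}
frontend extra-{{ $cfg.Port }}
    bind *:{{ $cfg.Port }}
    {{- range $cfg.Options }}
    {{ . }}
    {{- end }}
    default_backend extra-{{ $cfg.Port }}-backend

backend extra-{{ $cfg.Port }}-backend
    {{- range $name, $addr := $.BackendServers }}
    server {{ $name }} {{ $addr }}:{{ $cfg.Port }} check
    {{- end }}
{{- end }}
`

func main() {
	data := ConfigData{
		ControlPlanePort: 6443,
		BackendServers:   map[string]string{"control-plane-1": "172.18.0.3"},
		AdditionalConfig: []AdditionalConfig{{Port: 9345, Options: []string{"mode tcp"}}},
	}
	tpl := template.Must(template.New("lb").Parse(extraSections))
	if err := tpl.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}
```

Rendered with the sample data above, this yields a TCP frontend/backend pair for port 9345 in addition to whatever the default template already generates.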
From my point of view this seems to be similar to what e.g. CAPO provides at its […]. I did not research in this case what other providers do here / have similar things. For CAPD I think this could fit well somewhere in […]. Something like […]
I think that could then be implemented straightforwardly in the template and (except if broken options get provided) won't break haproxy itself.
If we are going to go down this path, I would prefer to keep things as simple as possible and avoid modeling the HA proxy config file in the CAPD API, because then there is a chance we will be asked to add more knobs in the future, which could create noise for a feature that is not core to CAPD. Let's simply expose plain text that will be appended to the default config + make sure to clearly document the contract about it: e.g. no validation, use at your own risk, etc.
I don't have a strong opinion on this, any approach works for me
Agreed with Fabrizio - I think this could be accomplished with plain text appended to the default config and by reloading that text from a ConfigMap mounted as a volume to CAPD.
I think I would prefer one additional field in DockerCluster vs. a command-line flag + ConfigMap mount with reload
IMO we don't enable this in CAPI - i.e. we just introduce the flag and let whoever is using the feature downstream manage the ConfigMap mount.
We still have to implement the file reload if the file changes. Adding a field to the DockerCluster CR seems simpler and more consistent with all the other configuration options that we have in CAPD to me.
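For comparison, the "one additional field in DockerCluster" option could look roughly like the following; this is a sketch only, and the field name `AdditionalHAProxyConfig` is invented here for illustration, not part of the actual CAPD API.

```go
// Hypothetical sketch of the "field in DockerCluster" option; the field name
// is invented for illustration and does not exist in the real CAPD types.
package v1beta1

type DockerClusterSpec struct {
	// ...existing fields elided...

	// AdditionalHAProxyConfig is plain text that CAPD would append verbatim to
	// the generated load balancer configuration. Per the contract discussed
	// above: no validation, use at your own risk.
	// +optional
	AdditionalHAProxyConfig string `json:"additionalHAProxyConfig,omitempty"`
}
```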
I'm fine either way - but definitely leaning toward allowing reloading plain text config so we don't have to consider this down the line for additional haproxy config.
What about a ConfigMap reference in the CAPD cluster spec, which we can allow setting only during cluster creation? The controller can read its content and append it to the HA proxy config. The LB configuration is updated only once per machine https://github.com/kubernetes-sigs/cluster-api/blob/main/test/infrastructure/docker/internal/controllers/dockermachine_controller.go#L270 so there won't be many requests to read the ConfigMap and no reload will be needed.
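A rough sketch of that read-once-and-append idea on the controller side, using the controller-runtime client; the function shape and the `value` data key are assumptions, not existing CAPD code.

```go
package controllers

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// appendCustomHAProxyConfig sketches the "read the ConfigMap once and append
// its content" approach. The ConfigMap name/namespace would come from the
// DockerCluster spec; using "value" as the data key is an assumption.
func appendCustomHAProxyConfig(ctx context.Context, c client.Reader, baseConfig, namespace, name string) (string, error) {
	if name == "" {
		return baseConfig, nil // nothing referenced, keep the default config
	}
	cm := &corev1.ConfigMap{}
	if err := c.Get(ctx, types.NamespacedName{Namespace: namespace, Name: name}, cm); err != nil {
		return "", fmt.Errorf("reading custom haproxy config: %w", err)
	}
	// Append verbatim; per the contract discussed above, no validation happens here.
	return baseConfig + "\n" + cm.Data["value"], nil
}
```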
Does it have to be the append model? What happens if someone wants to change the behaviour of an existing section? Could we:
I'm ok with full replacement as well (it all boils down to being very explicit in documenting the contract)
100% agree with you @fabriziopandini that we need to be explicit in documenting the contract. I can help in this area if any extra help is required (@alexander-demicev feel free to ping me). We've been working on an issue with the RKE2 control plane provider where we need a full replacement (instead of having to use a fork). The append-only model wouldn't work for our scenario. To provide the specifics, we generally make these changes:
/triage accepted
We are planning to use CAPD for testing the RKE2 bootstrap provider. RKE2 runs a server alongside the k8s API server; that server listens on port 9345 for registering new nodes. We'd like to be able to extend/provide a custom HA proxy config in CAPD. For RKE2 it should look something like this:
Template from https://github.com/kubernetes-sigs/cluster-api/blob/main/test/infrastructure/docker/internal/third_party/forked/loadbalancer/config.go +
A possible option to achieve this is to read the custom template from a ConfigMap, process it using the ConfigData structure from kind, and append the result to the existing configuration.
The downside of this approach is that it only works for our specific use case and won't work if someone would like to provide a completely custom haproxy config.
cc @richardcase @belgaied2
/kind feature