
Explicit LoadBalancing Configurations for HTTPBackendRefs #992

Closed
shaneutt opened this issue Jan 13, 2022 · 6 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature.

Comments

@shaneutt
Member

shaneutt commented Jan 13, 2022

What would you like to be added:

Clear, explicit and top-level configuration for the load-balancing strategy when using multiple HTTPBackendRefs in an HTTPRoute ruleset.

Why this is needed:

Currently only a "weighted" load-balancing strategy is supported, and it is implicit in the configuration rather than explicit. There is no way to define other strategies, or for implementations to make their own specific strategies available.
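For context, the implicit "weighted" behavior comes from the optional `weight` field on each backend reference. A minimal sketch (resource names are illustrative; `v1alpha2` was the current API version at the time of this issue):

```yaml
# Illustrative only: the 90/10 split below is the sole load-balancing
# control available; the strategy itself ("weighted") is implied by the
# presence of weights, not named anywhere in the spec.
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: HTTPRoute
metadata:
  name: example-route
spec:
  rules:
  - backendRefs:
    - name: service-v1
      port: 80
      weight: 90   # ~90% of matching traffic
    - name: service-v2
      port: 80
      weight: 10   # ~10% of matching traffic
```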

@shaneutt shaneutt added the kind/feature Categorizes issue or PR as related to a new feature. label Jan 13, 2022
@howardjohn
Contributor

One thing to consider is whether this is a property of the route or of the backend itself (e.g. a Policy attached to a Service).

@shaneutt
Member Author

shaneutt commented Jan 13, 2022

I'm writing up a GEP right now, which I would appreciate your feedback on. I did consider this, and my current inclination is that it should be a property adjacent to the list of refs.

@howardjohn
Contributor

I think Route is the wrong place for this, or at least the wrong place to be the exclusive location. If it's a Policy, then it can attach at different points. I think the most important attachment points are Service and global. For example, global is useful if you want to say "all Services should use the 'failover to another region' load-balancing strategy"; probably not every Service in the cluster needs to define that individually. As another example, a cluster admin could decide they like round robin by default, but a specific Service needs sticky sessions or something similar.
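A policy-attachment approach along these lines might look like the following sketch. The `LoadBalancerPolicy` kind, its API group, and the `strategy` field are all hypothetical, invented here only to illustrate the attach-to-Service idea; no such resource exists in the Gateway API:

```yaml
# Hypothetical resource, sketched to illustrate policy attachment;
# not part of any real Gateway API version.
apiVersion: example.gateway.networking.k8s.io/v1alpha1
kind: LoadBalancerPolicy
metadata:
  name: sticky-for-checkout
spec:
  targetRef:              # attaches to a Service, not a Route
    group: ""
    kind: Service
    name: checkout
  strategy: StickySession # a cluster-wide default policy could instead set RoundRobin
```

Because the policy targets the Service, every Route forwarding to `checkout` would pick up the strategy, and a global default could be expressed by a policy targeting a broader scope rather than repeating the setting per Service.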

Briefly looking at some proxies, it seems most put this at the backend level:
Google: https://cloud.google.com/compute/docs/reference/rest/v1/backendServices
Envoy: https://www.envoyproxy.io/docs/envoy/latest/api-v3/config/cluster/v3/cluster.proto.html?highlight=load
Possibly nginx (not as familiar): https://docs.nginx.com/nginx/admin-guide/load-balancer/http-load-balancer/
haproxy: https://www.digitalocean.com/community/tutorials/an-introduction-to-haproxy-and-load-balancing-concepts

One implementation note for those proxies: you end up needing to create extra backend configurations, on the order of (number of backends × number of routes that configure backend properties). This can explode to very large numbers.

@shaneutt
Member Author

I get what you're saying. For the moment I've put up a draft PR:

#993

I would appreciate your early feedback on that, and if you'd like to collaborate directly with me on that branch I would be all for it.

@hbagdi
Contributor

hbagdi commented Jan 13, 2022

xref #611 #196 #98

@shaneutt shaneutt removed their assignment Jan 24, 2022
@shaneutt
Member Author

We talked about this in the community sync today. Along with the feedback on the draft PR, the conclusion was that policy objects are the mechanism we have today to solve for this. After discussion highlighting many of the complexities of trying to add something simpler, I am convinced that using policy is the right approach for now. I consider this closed, but if someone thinks of another way we could implement this and wants to re-open it, I would be interested to see it.
