How can I change the graphd service type to LoadBalancer? Currently ClusterIP is hardcoded #134

Hi @wey-gu, how can we change the graphd service type to LoadBalancer? Currently ClusterIP is hardcoded!

Comments
Dear @porscheme, the Nebula graph client / graphd endpoints are load-balanced from the client side: if you look into those clients, we pass all graphd addresses to the connection_pool and it round-robins across them. To me, the load-balancer / API-gateway pattern does not really apply to NebulaGraph. @MegaByte875 @veezhang @kqzh kindly add to / correct me if I have understood this wrongly. Thanks!
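To make the client-side round-robin concrete, here is a minimal sketch using the nebula3-python client; the graphd addresses and the root/nebula credentials are placeholders, so substitute the endpoints that are actually reachable from your application (for example, the NodePort or ingress addresses discussed below).

```python
from nebula3.gclient.net import ConnectionPool
from nebula3.Config import Config

config = Config()
config.max_connection_pool_size = 10

# List every graphd endpoint; the pool distributes new sessions across them,
# so no server-side load balancer is needed in front of graphd.
pool = ConnectionPool()
assert pool.init(
    [
        ("graphd0.example.com", 9669),
        ("graphd1.example.com", 9669),
        ("graphd2.example.com", 9669),
    ],
    config,
)

session = pool.get_session("root", "nebula")
try:
    print(session.execute("SHOW HOSTS;"))
finally:
    session.release()
    pool.close()
```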
nebula-operator:v1.1.0
@porscheme We also provide a graph-nodeport service and an nginx-ingress sample under https://github.com/vesoft-inc/nebula-operator/tree/master/config/samples; you can give them a try.
Now I understand that you would like to expose the graphd endpoints rather than ask about the load-balancing part. We should create an individual service per graphd instance (as I mentioned above, load balancing is done client-side), as @MegaByte875 shared in the samples. @MegaByte875, is it possible for the operator itself to provide those NodePort or ingress services per graphd? I believe this would be super helpful for users, as the application layer is in most cases not in the same namespace, or even the same cluster, as the NebulaCluster. If it makes sense and is doable, I will create the feature issue then :)
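As a rough illustration of the per-instance exposure idea, below is a sketch that uses the Kubernetes Python client to create one NodePort Service per graphd pod. The pod names (nebula-graphd-0..2), the namespace, and the replica count are assumptions made for the example, not names guaranteed by the operator; adjust them to your NebulaCluster. If the operator grows this feature, the manual step becomes unnecessary.

```python
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() when running in-cluster
v1 = client.CoreV1Api()

# Assumed for illustration: a 3-replica graphd StatefulSet whose pods are
# named nebula-graphd-0..2 in the "default" namespace.
namespace = "default"
for i in range(3):
    pod_name = f"nebula-graphd-{i}"
    svc = client.V1Service(
        metadata=client.V1ObjectMeta(name=f"{pod_name}-nodeport"),
        spec=client.V1ServiceSpec(
            type="NodePort",
            # StatefulSet pods carry this well-known label, so each Service
            # targets exactly one graphd instance.
            selector={"statefulset.kubernetes.io/pod-name": pod_name},
            ports=[client.V1ServicePort(name="thrift", port=9669, target_port=9669)],
        ),
    )
    v1.create_namespaced_service(namespace=namespace, body=svc)
    print(f"created NodePort service for {pod_name}")
```

Each node IP plus the allocated nodePort can then be listed in the client connection pool shown above.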
Thanks @wey-gu & @MegaByte875.
Can you make this part of the helm chart?
@wey-gu Yes, this can be a feature. @porscheme OK, I will make the service configurable via the helm charts.
I will close the issue; please re-open it if you have other questions.