
istio ingress srvlb collides with nginx ingress controller #2

Open
gsfd2000 opened this issue Sep 10, 2021 · 4 comments

Comments

@gsfd2000

Hi Victor,
I am trying to follow your steps and have an issue with the application's Istio ingress controller and the NGINX ingress controller both being deployed on the cluster. Do you have setup steps for NGINX on the k3d cluster somewhere? Both claim all 80/443 ingress slots per node (using DaemonSets), so they collide, and I wonder where that conflict avoidance should be configured. Short advice would be highly appreciated. Regards

@vfarcic
Owner

vfarcic commented Sep 10, 2021

I don't think I ever used both in k3d (in "real" clusters, yes). When working locally, and when I do need Istio, I tend to redirect all the traffic through the Gateway, so I do not have ready-to-go instructions for doing that in k3d. But the general gist is that you'd need to change the Service ports of one of the two (e.g., NGINX) to use different NodePorts.

In "real" clusters, that is not a problem since those services are of LoadBalancer type. They get external LBs with different IPs so there is no port collision.

Short advice: check, for example, Helm values of NGINX ingress and see which ones should be modified to use different ports.
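To make that concrete, here is a hedged sketch of such a Helm override. The release name and namespace are taken from the `kubectl get svc` output later in this thread; the value keys assume the upstream `ingress-nginx` chart and may differ in the chart actually installed, so verify them first with `helm show values`:

```shell
# Sketch only: move the NGINX ingress Service off 80/443 so the k3s svclb
# DaemonSet stops claiming those host ports on every node.
# Assumes the upstream ingress-nginx chart; verify the keys with:
#   helm show values ingress-nginx/ingress-nginx
helm upgrade ingress-controller ingress-nginx/ingress-nginx \
  --namespace kube-system \
  --reuse-values \
  --set controller.service.ports.http=8080 \
  --set controller.service.ports.https=8443
```

With the NGINX Service on 8080/8443, its svclb DaemonSet binds those host ports instead, leaving 80/443 free for the Istio ingress gateway.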

Longer advice: I can create a demo but I cannot guarantee when.

@gsfd2000
Author

Thanks a lot for the fast response. If I understand correctly, the NGINX controller deploys per-node service LBs on 80/443 mapped to a NodePort, and the istio-ingressgateway tries to do the same. The problem is that both are Helm charts from the internet, and their deployment logic uses DaemonSets, so both always try to deploy to all nodes. I wonder how to either reduce their number to two each (four-node cluster) or map the Istio ingress gateway to another port (a Kustomize patch does not work because the YAMLs in the /production folder are not deployed via Kustomize, though that could be changed). The app stack (Argo CD, Argo Workflows, and Argo Events) seems to be controlled by the NGINX ingress, and only the custom app goes via Istio (?). If that is the case, I need to keep both ingress setups to reach everything, or kill one and redirect everything to the remainder, which may require more changes. Is there any EASY way to patch the istio-ingress to two deployments, or to override its ports? Thanks
NAMESPACE      NAME                                                TYPE           CLUSTER-IP      EXTERNAL-IP             PORT(S)                                      AGE
kube-system    ingress-controller-nginx-ingress-nginx-controller   LoadBalancer   10.43.115.249   172.18.0.2,172.18.0.5   80:31642/TCP,443:31673/TCP                   4h22m
istio-system   istio-ingressgateway                                LoadBalancer   10.43.106.173   172.18.0.3,172.18.0.4   15021:30640/TCP,80:32403/TCP,443:31628/TCP   154m

[user@ossystem installation]$ kubectl get ds -A
NAMESPACE      NAME                                                       DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
monitoring     prometheus-node-exporter                                   4         4         4       4            4                           3d2h
kube-system    svclb-ingress-controller-nginx-ingress-nginx-controller    4         4         2       4            2                           5h12m
istio-system   svclb-istio-ingressgateway                                 4         4         2       4            2                           3h24m
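For the "overwrite its ports" question, one hedged option is to patch the istio-ingressgateway Service directly. This is a sketch only: the `/spec/ports` indices assume the port order shown in the `kubectl get svc` output above (15021 at index 0, 80 at index 1, 443 at index 2), and the 8080/8443 values are arbitrary examples:

```shell
# Sketch only: change the istio-ingressgateway Service ports so its svclb
# DaemonSet no longer competes with NGINX for host ports 80/443.
# The /spec/ports indices assume the order 15021, 80, 443 shown above.
kubectl --namespace istio-system patch service istio-ingressgateway \
  --type json \
  --patch '[
    {"op": "replace", "path": "/spec/ports/1/port", "value": 8080},
    {"op": "replace", "path": "/spec/ports/2/port", "value": 8443}
  ]'
```

Note that re-applying the Istio installation manifests (e.g., via `istioctl` or Helm) would revert an ad-hoc patch like this, so the durable fix is to set the gateway Service ports in the installation values themselves.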

@gsfd2000
Author

@vfarcic
Owner

vfarcic commented Sep 10, 2021
