
TCP Proxy Not Listening (tcp-services) #4213

Closed
bitva77 opened this issue Jun 19, 2019 · 21 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@bitva77

bitva77 commented Jun 19, 2019

Is this a BUG REPORT or FEATURE REQUEST? (choose one):

BUG REPORT

NGINX Ingress controller version:

quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.24.1

Kubernetes version (use kubectl version):

Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.2", GitCommit:"66049e3b21efe110454d67df4fa62b08ea79a19b", GitTreeState:"clean", BuildDate:"2019-05-16T16:23:09Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}

Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.0", GitCommit:"ddf47ac13c1a9483ea035a79cd7c10005ff21a6d", GitTreeState:"clean", BuildDate:"2018-12-03T20:56:12Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}

Environment:

Baremetal: kubeadm install on RedHat 7.6

What happened:

Created a TCP proxy and the port is not being listened on.

What you expected to happen:

TCP port to be exposed.

How to reproduce it (as minimally and precisely as possible):

  1. Controller installed via the mandatory YAML file in the docs.

  2. Nginx Service created like so

---
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
    - name: https
      port: 443
      targetPort: 443
      protocol: TCP
    - name: logstash-port-9615
      port: 9615
      targetPort: 9615
      protocol: TCP
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

  3. Have an application Service running:
logstash-service   ClusterIP   10.106.112.104   <none>        9615/TCP   48m
  4. tcp-services configured like so:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
data:
  9615: "logstash/logstash-service:9615"

namespace is correct.

Anything else we need to know:

Logs:

-------------------------------------------------------------------------------
NGINX Ingress controller
  Release:    0.24.1
  Build:      git-ce418168f
  Repository: https://github.com/kubernetes/ingress-nginx
-------------------------------------------------------------------------------

W0619 21:58:39.281513       6 flags.go:214] SSL certificate chain completion is disabled (--enable-ssl-chain-completion=false)
nginx version: nginx/1.15.10
W0619 21:58:39.284806       6 client_config.go:549] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I0619 21:58:39.284992       6 main.go:205] Creating API client for https://10.96.0.1:443
I0619 21:58:39.292838       6 main.go:249] Running in Kubernetes cluster version v1.13 (v1.13.0) - git (clean) commit ddf47ac13c1a9483ea035a79cd7c10005ff21a6d - platform linux/amd64
I0619 21:58:39.495093       6 main.go:124] Created fake certificate with PemFileName: /etc/ingress-controller/ssl/default-fake-certificate.pem
I0619 21:58:39.512678       6 nginx.go:265] Starting NGINX Ingress controller
I0619 21:58:39.517114       6 event.go:209] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"nginx-configuration", UID:"11ebd92e-92d7-11e9-a2da-0050569d3226", APIVersion:"v1", ResourceVersion:"2269814", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/nginx-configuration
I0619 21:58:39.520390       6 event.go:209] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"1224cc7e-92d7-11e9-a2da-0050569d3226", APIVersion:"v1", ResourceVersion:"2269816", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
I0619 21:58:39.520420       6 event.go:209] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"1207f437-92d7-11e9-a2da-0050569d3226", APIVersion:"v1", ResourceVersion:"2273791", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
I0619 21:58:40.713729       6 nginx.go:311] Starting NGINX process
I0619 21:58:40.713828       6 leaderelection.go:217] attempting to acquire leader lease  ingress-nginx/ingress-controller-leader-nginx...
I0619 21:58:40.714474       6 controller.go:170] Configuration changes detected, backend reload required.
I0619 21:58:40.716560       6 status.go:86] new leader elected: nginx-ingress-controller-689498bc7c-5sb9h
I0619 21:58:40.807289       6 controller.go:188] Backend successfully reloaded.
I0619 21:58:40.807330       6 controller.go:202] Initial sync, sleeping for 1 second.
E0619 21:58:41.605302       6 checker.go:57] healthcheck error: 500
[19/Jun/2019:21:58:41 +0000]TCP200000.000
[19/Jun/2019:21:58:57 +0000]TCP200000.000
I0619 21:59:30.943330       6 leaderelection.go:227] successfully acquired lease ingress-nginx/ingress-controller-leader-nginx
I0619 21:59:30.943592       6 status.go:86] new leader elected: nginx-ingress-controller-689498bc7c-5kb28

I've tried various combinations of PROXY:PROXY, ::PROXY, and :PROXY as well, but no luck.

It's close to #3984; however, in that one the proxy seems to actually happen. I'm not even getting that far: netstat -an | grep 9615 is empty.
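
For reference, whether nginx rendered a server block for the port at all can be checked from inside the controller pod (pod name taken from the logs above; substitute your own):

kubectl -n ingress-nginx exec nginx-ingress-controller-689498bc7c-5kb28 -- \
  grep -n '9615' /etc/nginx/nginx.conf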

@Zempashi

I ran into the same issue, and the problem was only that nginx doesn't reload automatically when the TCP config changes. Deleting the pod fixed the problem.
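
A minimal sketch of that workaround, assuming the label selectors from the standard mandatory manifests:

# recreate the controller pod; the Deployment brings it back with a fresh nginx.conf
kubectl -n ingress-nginx delete pod \
  -l app.kubernetes.io/name=ingress-nginx,app.kubernetes.io/part-of=ingress-nginx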

@becrespi

I have the same problem: the tcp-services ConfigMap doesn't seem to be used. Moreover, in the YAML for the DaemonSet there is an arg for the nginx config (-nginx-configmaps=$(POD_NAMESPACE)/nginx-config) but nothing about the tcp-services ConfigMap. Some older tutorials (like here or here) had the arg --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services, but it doesn't seem to exist anymore.
Can someone explain the reason?

@luizportela

Changing the ingress-nginx "LoadBalancer" annotation worked for me:

service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
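
For context, a minimal sketch of where that annotation lives (port and names are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  annotations:
    # tell the AWS classic ELB to treat backends as plain TCP
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
spec:
  type: LoadBalancer
  ports:
    - name: logstash-port-9615
      port: 9615
      targetPort: 9615
      protocol: TCP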

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 17, 2019
@holmesb

holmesb commented Nov 28, 2019

@bitva77 did you fix this?

@yizha

yizha commented Dec 5, 2019

Similar issue here, and updating an Ingress rule fixed it. It looks like editing the tcp-services ConfigMap by adding/removing the 'PROXY' field(s) doesn't trigger a regeneration of the nginx.conf file.

Setup:

  1. AWS/EKS with proxy-protocol enabled on ELB (classic load balancer and all listeners are TCP)
  2. ingress nginx with the 'use-proxy-protocol: "true"' in the 'nginx-configuration' configmap

Steps to reproduce:

  1. edit the 'tcp-services' ConfigMap to add a TCP service: 8000: namespace/service:8000.
  2. edit the nginx-controller Service to add a port (port:8000 --> targetPort:8000) for the TCP service in step 1.
  3. check /etc/nginx/nginx.conf in the nginx controller pod and confirm it contains a 'server' block with the correct listen 8000; directive for the tcp/8000 service.
  4. edit the 'tcp-services' ConfigMap again to add the proxy-protocol decode directive, so the k/v for the tcp/8000 service becomes 8000: namespace/service:8000:PROXY (see the sketch after these steps).
  5. check /etc/nginx/nginx.conf in the nginx controller pod: there isn't any change compared with step 3; it is still listen 8000;.
  6. edit some Ingress rule (make some change, like updating the host).
  7. check /etc/nginx/nginx.conf in the nginx controller pod again: now the listen directive for the tcp/8000 service is listen 8000 proxy_protocol;, which is correct.
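
A minimal sketch of the ConfigMap from step 4, with namespace/service as placeholders:

apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # the trailing :PROXY asks nginx to decode proxy protocol on this listener,
  # i.e. it should render as "listen 8000 proxy_protocol;"
  8000: "namespace/service:8000:PROXY"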

@dorsany

dorsany commented Dec 22, 2019

As @yizha said, you need to update two places (the Service and the ConfigMap) in order to open a new TCP port.
Is there no way to develop an nginx-ingress operator that watches the ingress-nginx-nginx-ingress-tcp ConfigMap and, when it changes, updates the ingress-nginx-nginx-ingress-controller Service?

@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 21, 2020
@sagimann

I followed the above steps but still no luck. The Service has an exposed port 5671 (RabbitMQ) and I applied a tcp-services YAML on top of my existing ingress-nginx namespace. Then I went into the ingress-nginx container to check nginx.conf, and I see something strange in the "stream" section, as if the upstream server did not get configured and was left as a placeholder:

        upstream upstream_balancer {
                server 0.0.0.1:1234; # placeholder
                balancer_by_lua_block {
                        tcp_udp_balancer.balance()
                }
        }
...
        # TCP services
        server {
                preread_by_lua_block {
                        ngx.var.proxy_upstream_name="tcp-dev-rabbitmq-rabbitmq-ha-5671";
                }
                listen                  5671;
                proxy_timeout           600s;
                proxy_pass              upstream_balancer;
        }

@Sanghren

(quoted @sagimann's comment and nginx.conf snippet above)
I have the same behaviour here :/

@sudobhat

I have the same behavior. I am using an AWS load balancer with multiple services deployed in the cluster; the http/https routes are working (80/443) but TCP is not.

I have Mosquitto running in the cluster, using TCP port 8883 (which is configured in the nginx tcp-services ConfigMap and Service).
But if I publish, I get a confusing message that says "Error: Success".

Bug?

@rivernews

rivernews commented Mar 27, 2020

@sagimann @tbrunain @sudobhat

Same as you guys, my nginx.conf looks exactly the same as yours except for the port and service name. When I try to connect to the TCP service, in my case 6379 for Redis, my redis-cli still fails and times out. I looked at the nginx controller log; when I run redis-cli, I don't see any new log lines, as if that route just doesn't exist.

But if you look at the bottom of this Oracle article, it states that "The upstream is proxying via Lua." and seems to acknowledge that the # placeholder comment we see in nginx.conf is expected.

Furthermore, when I look at an earlier section of nginx.conf, I see this comment:

	upstream upstream_balancer {
		### Attention!!!
		#
		# We no longer create "upstream" section for every backend.
		# Backends are handled dynamically using Lua. If you would like to debug
		# and see what backends ingress-nginx has in its memory you can
		# install our kubectl plugin https://kubernetes.github.io/ingress-nginx/kubectl-plugin.
		# Once you have the plugin you can use "kubectl ingress-nginx backends" command to
		# inspect current backends.
		#
		###
		
		server 0.0.0.1; # placeholder
...

So it looks like nginx-ingress no longer uses a static stream upstream, at least for TCP services; it uses Lua instead. Looking at the CHANGELOG, this change was perhaps introduced around v0.21. To be honest I'm not familiar with Lua, so I need to investigate more as well. Hope this sheds some light on the issue; let me know if any of you figure out the problem.
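
Per the comment in nginx.conf above, the in-memory backends (including TCP ones) can be inspected with the kubectl plugin, assuming it is installed:

kubectl ingress-nginx backends -n ingress-nginx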

@rivernews

rivernews commented Mar 31, 2020

Just wanted to report back:
The above setup (create the ConfigMap for the ingress controller, set port: namespace/service-name:port) did work, but I found out that I had forgotten to open that port on my firewall. That's it. Debugged for days, and it was this.

So the placeholder is not a concern here (since nginx-ingress seems to use Lua instead of a static upstream); it's probably something else that prevents you from reaching that port inside nginx. This post gives some tips for debugging such cases, which you might find helpful.
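
For the bare-metal RedHat setup from the original report, a hedged example of opening the port, assuming firewalld is in use:

sudo firewall-cmd --permanent --add-port=9615/tcp
sudo firewall-cmd --reload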

@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@nrvmodi

nrvmodi commented May 4, 2020

I was using image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.31.1.

I have the same problem, and I found that I had missed the --tcp-services-configmap option on the ingress-nginx-controller pod. After providing that option, as below, it works fine.

- --tcp-services-configmap=ingress-nginx/tcp-services

I also tested reloading after adding a new port in the tcp-services.yaml file. For that, you must apply tcp-services.yaml and nginx-ingress-controller.yaml once again for the new port to take effect in the nginx-ingress-controller pod (see the commands below).
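
A sketch of that re-apply step, assuming those are your local manifest file names:

kubectl apply -f tcp-services.yaml
kubectl apply -f nginx-ingress-controller.yaml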

@marcusjwhelan

@nrvmodi where do you set that --tcp-services-configmap? Do you set it when installing nginx-ingress from Helm? If it's not there, where and when do you set that argument? If it's already installed, where does it go? Also, where can you find documentation on where this should go?

@rivernews

@marcusjwhelan if you're using Helm, you can take a look at this comment from a Helm issue. Basically, you just need to set the tcp value in the format mentioned in that comment when installing the controller with the Helm CLI, like this: helm install ... --set tcp.8080="...". The latest Helm chart for nginx-ingress will create the tcp-services ConfigMap for you.
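
A fuller hedged example (release name, chart repo, namespace, and target service are illustrative):

helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --set tcp.8080="my-namespace/my-service:8080"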

The tcp-services-configmap flag seems to be a controller arg. Based on this post, you'll use it if you're creating the ingress controller with a Kubernetes Deployment yourself:

apiVersion: apps/v1
kind: Deployment
...
spec:
  template:
    spec:
      containers:
      - image: <nginx ingress controller image you want to use>
        args:
        - --tcp-services-configmap=...  # <---- here you go

Personally I'll go for Helm because it's much simpler.

@marcusjwhelan

@rivernews Would it be fine to just patch the deployment, since I followed the ingress-nginx install at https://kubernetes.github.io/ingress-nginx/deploy/#azure, which is for Azure? Reading this https://minikube.sigs.k8s.io/docs/tutorials/nginx_tcp_udp_ingress/ and this https://skryvets.com/blog/2019/04/09/exposing-tcp-and-udp-services-via-ingress-on-minikube/ I get the idea that I don't need to create a NodePort/ClusterIP for the nodes? It will automatically just connect to the ports of the pods I am creating. Both are on port 25565, so I need a way to route to each, but on a different port. Or do I need to create a NodePort/ClusterIP for each pod so I can specify a different port? Or how does that work exactly?
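
One hedged way to patch an already-installed controller (deployment name is illustrative; check yours with kubectl -n ingress-nginx get deploy):

kubectl -n ingress-nginx patch deployment nginx-ingress-controller --type=json \
  -p='[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--tcp-services-configmap=ingress-nginx/tcp-services"}]'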

@rivernews

rivernews commented May 5, 2020 via email

@Varun-garg

Varun-garg commented Jul 14, 2021

Something like this works for Helm with a values.yaml:

controller:
  extraArgs:
    tcp-services-configmap: $(POD_NAMESPACE)/my-tcp-services
    udp-services-configmap: $(POD_NAMESPACE)/my-udp-services
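
And a hedged example of applying it (release and chart names are illustrative):

helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx -f values.yaml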
