
External IP and NodePort not accessible in Kubernetes on Docker for Windows #1950

Closed
dheerajjoshim opened this issue Apr 16, 2018 · 19 comments

Comments

@dheerajjoshim

dheerajjoshim commented Apr 16, 2018

I am running Docker for Windows on the Edge channel to experiment with Kubernetes in Docker.
I have a service running in Kubernetes:

PS C:\WINDOWS\system32> kubectl.exe get services
NAME         TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
keycloak     LoadBalancer   10.103.61.133    localhost     8665:31492/TCP   2d

Service description

PS C:\WINDOWS\system32> kubectl.exe describe service keycloak
Name:                     keycloak
Namespace:                default
Labels:                   app=keycloak
Annotations:              <none>
Selector:                 app=keycloak,tier=security
Type:                     LoadBalancer
IP:                       10.103.61.133
LoadBalancer Ingress:     localhost
Port:                     <unset>  8665/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31492/TCP
Endpoints:                10.1.0.58:8080
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

As you can see, the cluster IP is 10.103.61.133 and the NodePort is 31492,
so I would expect http://10.103.61.133:31492 to work. But 10.103.61.133 is not accessible at all,
while http://localhost:8665 is. Shouldn't http://CLUSTER_IP:NODE_PORT be accessible?

Is it a problem with Windows port mapping? I tried adding a firewall rule for port 31492, but no luck.

  • Windows Version: Windows 10 Enterprise - Version 1607
  • Docker for Windows Version: 18.04.0-ce-rc2

Expected behavior

http://CLUSTER_IP:NODE_PORT should be accessible

Actual behavior

http://EXTERNAL_IP:PORT is accessible
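
To make the expectation concrete, here is a minimal sketch of the three addresses involved, assuming the keycloak service above (the node internal IP is a placeholder, not a value from this setup):

# ClusterIP (10.103.61.133) is only routable from inside the cluster/pod network
kubectl get nodes -o wide                # note the node's INTERNAL-IP
curl http://<NODE_INTERNAL_IP>:31492     # NodePort is exposed on the node's own address
                                         # (on Docker for Windows the node is a Hyper-V VM,
                                         #  so its IP may not be reachable from the host)
curl http://localhost:8665               # LoadBalancer port published by Docker for Windows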

@jasonbivins

Hi @dheerajjoshim
Can you try again and post a diagnostic ID?

@honcao

honcao commented Apr 17, 2018

I have a similar issue:

azureuser@k8s-master-26985512-0:~$ kubectl get nodes
NAME                    STATUS     ROLES    AGE   VERSION
26985k8s9000            Ready               1h    v1.9.6-7+fc1e222311cd03-dirty
k8s-master-26985512-0   NotReady   master   1h    v1.9.5-beta.0.77+fc1e222311cd03

azureuser@k8s-master-26985512-0:~$ kubectl get pods -o wide
NAME                             READY   STATUS    RESTARTS   AGE   IP            NODE
win-webserver-68c499b5c4-dfhbn   1/1     Running   0          32m   10.244.1.25   26985k8s9000

NAME            TYPE           CLUSTER-IP   EXTERNAL-IP      PORT(S)        AGE
win-webserver   LoadBalancer   10.0.6.47    192.168.102.51   80:30726/TCP   1h

The master IP is 10.240.255.5; accessing the node port on the master works fine:
azureuser@k8s-master-26985512-0:~$ curl 10.240.255.5:30726

Windows Container Web Server

IP 10.244.1.25 callerCount 7

Logging on to the node whose IP is 10.140.0.4 and running the same curl there gives the following error:

PS C:\Users\azureuser> curl 10.140.0.4:30726
curl : Unable to connect to the remote server
At line:1 char:1

+ curl 10.140.0.4:30726
    + CategoryInfo          : InvalidOperation: (System.Net.HttpWebRequest:HttpWebRequest) [Invoke-WebRequest], WebException
    + FullyQualifiedErrorId : WebCmdletWebResponseException,Microsoft.PowerShell.Commands.InvokeWebRequestCommand

@dheerajjoshim
Author

Diagnostic ID
EEFFC398-6258-4933-9006-7C1798D6B525/2018-04-17_08-07-41

@dheerajjoshim
Author

Removed all pods and redid everything again.
Here is the diagnostic ID:

EEFFC398-6258-4933-9006-7C1798D6B525/2018-04-17_08-43-58

@dheerajjoshim
Author

Any help is appreciated. I am kinda stuck here.

@chaima-ennar

Did anyone find a solution to this issue?

@dheerajjoshim
Author

dheerajjoshim commented Jun 15, 2018 via email

@calebpalmer

I am also having this issue.

@bartvanhoutte

+1

@schristoff

I am also having this issue. Please review!

@h3nryza

h3nryza commented Aug 10, 2018

I was facing the same issue on CentOS 7; however, this fixed it. I was using a custom IP range for Flannel.
Example:
kubeadm init --pod-network-cidr 192.168.0.0/32

It looks like using a /16 is what Flannel wants:
kubeadm init --pod-network-cidr 10.244.0.0/16

To reset, I did the following:

kubeadm reset
systemctl stop kubelet
systemctl stop docker
# clear leftover CNI and kubelet state
rm -rf /var/lib/cni/
rm -rf /var/lib/kubelet/*
rm -rf /etc/cni/
# bring down and delete the old bridge/overlay interfaces
ifconfig cni0 down
ifconfig flannel.1 down
ifconfig docker0 down
ip link delete cni0
ip link delete flannel.1
systemctl start kubelet
systemctl start docker

kubeadm init --pod-network-cidr 10.244.0.0/16
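
After the re-init, the pod network add-on still needs to be applied with a CIDR that matches; a sketch using the Flannel manifest URL as published by the project at the time (verify against the current Flannel docs before using it):

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml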

@docker-robott
Collaborator

Issues go stale after 90d of inactivity.
Mark the issue as fresh with a /remove-lifecycle stale comment.
Stale issues will be closed after an additional 30d of inactivity.

Prevent issues from auto-closing with a /lifecycle frozen comment.

If this issue is safe to close now, please do so.

Send feedback to the Docker Community Slack channels #docker-for-mac or #docker-for-windows.
/lifecycle stale

@giggio

giggio commented Nov 19, 2018

/remove-lifecycle stale

@rdalbuquerque

In my case I had installed nginx ingress to solve this problem. I had to use kubernetes/nginx-ingress.

On Fri 15 Jun, 2018, 9:15 PM chaima-ennar wrote: Did anyone find the solution to this issue?

Hi, I'm facing the same issue and tried to use nginx-ingress to solve it too, but it still isn't working. Can you explain how you did it? Maybe post the YAML? Thanks!

@dheerajjoshim
Author

Nginx default backend

kind: Service
apiVersion: v1
metadata:
  name: nginx-default-backend
spec:
  ports:
  - port: 80
    targetPort: http
  selector:
    app: nginx-default-backend
 
 
---
 
 
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: nginx-default-backend
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx-default-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        image: gcr.io/google_containers/defaultbackend:1.4
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
        ports:
        - name: http
          containerPort: 8080
          protocol: TCP

Nginx Ingress controller

kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
spec:
  type: LoadBalancer
  selector:
    app: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: http
  - name: https
    port: 443
    targetPort: https
 
 
---
 
 
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: ingress-nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: ingress-nginx
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.14.0
        name: ingress-nginx
        imagePullPolicy: Always
        ports:
          - name: http
            containerPort: 80
            protocol: TCP
          - name: https
            containerPort: 443
            protocol: TCP
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        env:
          - name: POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/nginx-default-backend

This is the configuration I had used.
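
On top of the controller, each backend still needs an Ingress rule. Below is a minimal sketch pointing at the keycloak service from the original report; the keycloak-ingress name and keycloak.local host are placeholders I made up, and extensions/v1beta1 matches the API version used above:

kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: keycloak-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: keycloak.local        # placeholder host; map it to 127.0.0.1 in the hosts file
    http:
      paths:
      - path: /
        backend:
          serviceName: keycloak  # service from the original report
          servicePort: 8665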

@ghost

ghost commented Feb 23, 2019

No, the ClusterIP is a virtual address that is only reachable from inside the pod network. It is not a node's address.

And the node is the VM running on Hyper-V, not the Windows host.
You can confirm this with "netstat.exe -na | findstr 31492" and
"curl.exe NODEADDRESS:31492" on the Windows host.
Here, NODEADDRESS is the address near (possibly the next one after) the address of the DockerNAT interface on the Windows host.

This is the same as minikube on Linux,
although minikube provides a way to get the address of the node VM with
"minikube service SERVICENAME --url".

@docker-robott
Collaborator

Issues go stale after 90d of inactivity.
Mark the issue as fresh with a /remove-lifecycle stale comment.
Stale issues will be closed after an additional 30d of inactivity.

Prevent issues from auto-closing with a /lifecycle frozen comment.

If this issue is safe to close now, please do so.

Send feedback to the Docker Community Slack channels #docker-for-mac or #docker-for-windows.
/lifecycle stale

@OneBlueBird

I am trying this with minikube on Windows 10. I can create the replication controller, pod, and service successfully, but I don't know how to access a simple Node.js app from my Windows Chrome browser. Please help.

@docker-robott
Collaborator

Closed issues are locked after 30 days of inactivity.
This helps our team focus on active issues.

If you have found a problem that seems similar to this, please open a new issue.

Send feedback to Docker Community Slack channels #docker-for-mac or #docker-for-windows.
/lifecycle locked

@docker docker locked and limited conversation to collaborators Jun 27, 2020