From 4a30596f848b8cdc74de1179916a1e42534bc531 Mon Sep 17 00:00:00 2001
From: TAKAHASHI Shuuji
Date: Thu, 23 Jul 2020 16:10:13 +0900
Subject: [PATCH 1/2] Copy tutorials/services/ directory from en/.

---
 content/ja/docs/tutorials/services/_index.md |   5 +
 .../ja/docs/tutorials/services/source-ip.md  | 452 ++++++++++++++++++
 2 files changed, 457 insertions(+)
 create mode 100755 content/ja/docs/tutorials/services/_index.md
 create mode 100644 content/ja/docs/tutorials/services/source-ip.md

diff --git a/content/ja/docs/tutorials/services/_index.md b/content/ja/docs/tutorials/services/_index.md
new file mode 100755
index 0000000000000..4356a61fec714
--- /dev/null
+++ b/content/ja/docs/tutorials/services/_index.md
@@ -0,0 +1,5 @@
+---
+title: "Services"
+weight: 70
+---
+
diff --git a/content/ja/docs/tutorials/services/source-ip.md b/content/ja/docs/tutorials/services/source-ip.md
new file mode 100644
index 0000000000000..03a9bb097c658
--- /dev/null
+++ b/content/ja/docs/tutorials/services/source-ip.md
@@ -0,0 +1,452 @@
+---
+title: Using Source IP
+content_type: tutorial
+min-kubernetes-server-version: v1.5
+---
+
+
+
+Applications running in a Kubernetes cluster find and communicate with each
+other, and the outside world, through the Service abstraction. This document
+explains what happens to the source IP of packets sent to different types
+of Services, and how you can toggle this behavior according to your needs.
+
+
+
+## {{% heading "prerequisites" %}}
+
+
+### Terminology
+
+This document makes use of the following terms:
+
+{{< comment >}}
+If localizing this section, link to the equivalent Wikipedia pages for
+the target localization.
+{{< /comment >}}
+
+[NAT](https://en.wikipedia.org/wiki/Network_address_translation)
+: network address translation
+
+[Source NAT](https://en.wikipedia.org/wiki/Network_address_translation#SNAT)
+: replacing the source IP on a packet; in this page, that usually means replacing it with the IP address of a node.
+
+[Destination NAT](https://en.wikipedia.org/wiki/Network_address_translation#DNAT)
+: replacing the destination IP on a packet; in this page, that usually means replacing it with the IP address of a {{< glossary_tooltip term_id="pod" >}}.
+
+[VIP](/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies)
+: a virtual IP address, such as the one assigned to every {{< glossary_tooltip text="Service" term_id="service" >}} in Kubernetes
+
+[kube-proxy](/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies)
+: a network daemon that orchestrates Service VIP management on every node
+
+### Prerequisites
+
+{{< include "task-tutorial-prereqs.md" >}}
+
+The examples use a small nginx webserver that echoes back the source
+IP of requests it receives through an HTTP header. You can create it as follows:
+
+```shell
+kubectl create deployment source-ip-app --image=k8s.gcr.io/echoserver:1.4
+```
+The output is:
+```
+deployment.apps/source-ip-app created
+```
+
+
+
+## {{% heading "objectives" %}}
+
+
+* Expose a simple application through various types of Services
+* Understand how each Service type handles source IP NAT
+* Understand the tradeoffs involved in preserving source IP
+
+
+
+
+
+## Source IP for Services with `Type=ClusterIP`
+
+Packets sent to ClusterIP from within the cluster are never source NAT'd if
+you're running kube-proxy in
+[iptables mode](/docs/concepts/services-networking/service/#proxy-mode-iptables)
+(the default).
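+
+If you are not sure which mode your cluster runs in, here is one quick check.
+This is only a sketch: it assumes a kubeadm-provisioned cluster, where
+kube-proxy reads its settings from a ConfigMap named `kube-proxy` in the
+`kube-system` namespace; clusters set up differently may not have this
+ConfigMap at all:
+
+```shell
+# Assumes a kubeadm cluster; an empty mode ("") means the default, iptables.
+kubectl -n kube-system get configmap kube-proxy -o yaml | grep "mode:"
+```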
+
+You can also query the kube-proxy mode by fetching
+`http://localhost:10249/proxyMode` on the node where kube-proxy is running.
+
+```console
+kubectl get nodes
+```
+The output is similar to this:
+```
+NAME                   STATUS     ROLES    AGE     VERSION
+kubernetes-node-6jst   Ready      <none>   2h      v1.13.0
+kubernetes-node-cx31   Ready      <none>   2h      v1.13.0
+kubernetes-node-jj1t   Ready      <none>   2h      v1.13.0
+```
+
+Get the proxy mode on one of the nodes (kube-proxy listens on port 10249):
+```shell
+# Run this in a shell on the node you want to query.
+curl http://localhost:10249/proxyMode
+```
+The output is:
+```
+iptables
+```
+
+You can test source IP preservation by creating a Service over the source IP app:
+
+```shell
+kubectl expose deployment source-ip-app --name=clusterip --port=80 --target-port=8080
+```
+The output is:
+```
+service/clusterip exposed
+```
+```shell
+kubectl get svc clusterip
+```
+The output is similar to:
+```
+NAME        TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
+clusterip   ClusterIP   10.0.170.92   <none>        80/TCP    51s
+```
+
+You can then hit the `ClusterIP` from a pod in the same cluster:
+
+```shell
+kubectl run busybox -it --image=busybox --restart=Never --rm
+```
+The output is similar to this:
+```
+Waiting for pod default/busybox to be running, status is Pending, pod ready: false
+If you don't see a command prompt, try pressing enter.
+
+```
+You can then run a command inside that Pod:
+
+```shell
+# Run this inside the terminal from "kubectl run"
+ip addr
+```
+```
+1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue
+    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
+    inet 127.0.0.1/8 scope host lo
+       valid_lft forever preferred_lft forever
+    inet6 ::1/128 scope host
+       valid_lft forever preferred_lft forever
+3: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc noqueue
+    link/ether 0a:58:0a:f4:03:08 brd ff:ff:ff:ff:ff:ff
+    inet 10.244.3.8/24 scope global eth0
+       valid_lft forever preferred_lft forever
+    inet6 fe80::188a:84ff:feb0:26a5/64 scope link
+       valid_lft forever preferred_lft forever
+```
+
+Then use `wget` to query the webserver through the `clusterip` Service:
+```shell
+# Replace 10.0.170.92 with the cluster IP of the Service named "clusterip"
+wget -qO - 10.0.170.92
+```
+```
+CLIENT VALUES:
+client_address=10.244.3.8
+command=GET
+...
+```
+The `client_address` is always the client pod's IP address, whether the client pod and server pod are on the same node or on different nodes.
+
+## Source IP for Services with `Type=NodePort`
+
+Packets sent to Services with
+[`Type=NodePort`](/docs/concepts/services-networking/service/#nodeport)
+are source NAT'd by default. You can test this by creating a `NodePort` Service:
+
+```shell
+kubectl expose deployment source-ip-app --name=nodeport --port=80 --target-port=8080 --type=NodePort
+```
+The output is:
+```
+service/nodeport exposed
+```
+
+```shell
+NODEPORT=$(kubectl get -o jsonpath="{.spec.ports[0].nodePort}" services nodeport)
+NODES=$(kubectl get nodes -o jsonpath='{ $.items[*].status.addresses[?(@.type=="ExternalIP")].address }')
+```
+
+If you're running on a cloud provider, you may need to open up a firewall rule
+for the `nodes:nodeport` reported above.
+Now you can try reaching the Service from outside the cluster through the node
+port allocated above.
+
+```shell
+for node in $NODES; do curl -s $node:$NODEPORT | grep -i client_address; done
+```
+The output is similar to:
+```
+client_address=10.180.1.1
+client_address=10.240.0.5
+client_address=10.240.0.3
+```
+
+Note that these are not the correct client IPs; they're cluster-internal IPs.
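+
+These addresses belong to the nodes themselves. To convince yourself, you can
+print each node's name next to its internal address and compare; the jsonpath
+expression below is just an illustrative sketch:
+
+```shell
+# List each node together with its InternalIP address
+kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.addresses[?(@.type=="InternalIP")].address}{"\n"}{end}'
+```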
+
+This is what happens:
+
+* client sends packet to `node2:nodePort`
+* `node2` replaces the source IP address (SNAT) in the packet with its own IP address
+* `node2` replaces the destination IP on the packet with the pod IP
+* packet is routed to node 1, and then to the endpoint
+* the pod's reply is routed back to node 2
+* the pod's reply is sent back to the client
+
+Visually:
+
+```
+          client
+             \ ^
+              \ \
+               v \
+   node 1 <--- node 2
+    | ^   SNAT
+    | |   --->
+    v |
+ endpoint
+```
+
+
+To avoid this, Kubernetes has a feature to
+[preserve the client source IP](/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip).
+If you set `service.spec.externalTrafficPolicy` to the value `Local`,
+kube-proxy only proxies requests to local endpoints, and does not
+forward traffic to other nodes. This approach preserves the original
+source IP address. If there are no local endpoints, packets sent to the
+node are dropped, so you can rely on the correct source IP in any packet
+processing rules you might apply to packets that make it through to the
+endpoint.
+
+Set the `service.spec.externalTrafficPolicy` field as follows:
+
+```shell
+kubectl patch svc nodeport -p '{"spec":{"externalTrafficPolicy":"Local"}}'
+```
+The output is:
+```
+service/nodeport patched
+```
+
+Now, re-run the test:
+
+```shell
+for node in $NODES; do curl --connect-timeout 1 -s $node:$NODEPORT | grep -i client_address; done
+```
+The output is similar to:
+```
+client_address=198.51.100.79
+```
+
+Note that you only got one reply, with the *right* client IP, from the one node on which the endpoint pod
+is running.
+
+This is what happens:
+
+* client sends packet to `node2:nodePort`, which doesn't have any endpoints
+* packet is dropped
+* client sends packet to `node1:nodePort`, which *does* have endpoints
+* node1 routes packet to endpoint with the correct source IP
+
+Visually:
+
+```
+        client
+       ^ /   \
+      / /     \
+     / v       X
+   node 1   node 2
+    ^ |
+    | |
+    | v
+ endpoint
+```
+
+
+
+## Source IP for Services with `Type=LoadBalancer`
+
+Packets sent to Services with
+[`Type=LoadBalancer`](/docs/concepts/services-networking/service/#loadbalancer)
+are source NAT'd by default, because all schedulable Kubernetes nodes in the
+`Ready` state are eligible for load-balanced traffic. So if packets arrive
+at a node without an endpoint, the system proxies them to a node *with* an
+endpoint, replacing the source IP on the packet with the IP of the node (as
+described in the previous section).
+
+You can test this by exposing the source-ip-app through a load balancer:
+
+```shell
+kubectl expose deployment source-ip-app --name=loadbalancer --port=80 --target-port=8080 --type=LoadBalancer
+```
+The output is:
+```
+service/loadbalancer exposed
+```
+
+Print out the IP addresses of the Service:
+```console
+kubectl get svc loadbalancer
+```
+The output is similar to this:
+```
+NAME           TYPE           CLUSTER-IP    EXTERNAL-IP       PORT(S)   AGE
+loadbalancer   LoadBalancer   10.0.65.118   203.0.113.140     80/TCP    5m
+```
+
+Next, send a request to this Service's external-ip:
+
+```shell
+curl 203.0.113.140
+```
+The output is similar to this:
+```
+CLIENT VALUES:
+client_address=10.240.0.5
+...
+```
+
+However, if you're running on Google Kubernetes Engine/GCE, setting the same `service.spec.externalTrafficPolicy`
+field to `Local` forces nodes *without* Service endpoints to remove
+themselves from the list of nodes eligible for load-balanced traffic by
+deliberately failing health checks.
+
+Visually:
+
+```
+                      client
+                        |
+                      lb VIP
+                     / ^
+                    v /
+health check --->   node 1   node 2 <--- health check
+        200  <---   ^ |             ---> 500
+                    | V
+                 endpoint
+```
+
+You can test this by patching the `service.spec.externalTrafficPolicy` field:
+
+```shell
+kubectl patch svc loadbalancer -p '{"spec":{"externalTrafficPolicy":"Local"}}'
+```
+
+You should immediately see the `service.spec.healthCheckNodePort` field allocated
+by Kubernetes:
+
+```shell
+kubectl get svc loadbalancer -o yaml | grep -i healthCheckNodePort
+```
+The output is similar to this:
+```yaml
+  healthCheckNodePort: 32122
+```
+
+The `service.spec.healthCheckNodePort` field points to a port on every node
+serving the health check at `/healthz`. You can test this:
+
+```shell
+kubectl get pod -o wide -l app=source-ip-app
+```
+The output is similar to this:
+```
+NAME                            READY     STATUS    RESTARTS   AGE       IP             NODE
+source-ip-app-826191075-qehz4   1/1       Running   0          20h       10.180.1.136   kubernetes-node-6jst
+```
+
+Use `curl` to fetch the `/healthz` endpoint on various nodes:
+```shell
+# Run this locally on a node you choose
+curl localhost:32122/healthz
+```
+```
+1 Service Endpoints found
+```
+
+On a different node you might get a different result:
+```shell
+# Run this locally on a node you choose
+curl localhost:32122/healthz
+```
+```
+No Service Endpoints Found
+```
+
+A controller running on the
+{{< glossary_tooltip text="control plane" term_id="control-plane" >}} is
+responsible for allocating the cloud load balancer. The same controller also
+allocates HTTP health checks pointing to this port/path on each node. Wait
+about 10 seconds for the two nodes without endpoints to fail health checks,
+then use `curl` to query the IPv4 address of the load balancer:
+
+```shell
+curl 203.0.113.140
+```
+The output is similar to this:
+```
+CLIENT VALUES:
+client_address=198.51.100.79
+...
+```
+
+## Cross-platform support
+
+Only some cloud providers offer support for source IP preservation through
+Services with `Type=LoadBalancer`.
+The cloud provider you're running on might fulfill the request for a loadbalancer
+in a few different ways:
+
+1. With a proxy that terminates the client connection and opens a new connection
+to your nodes/endpoints. In such cases the source IP will always be that of the
+cloud LB, not that of the client.
+
+2. With a packet forwarder, such that requests from the client sent to the
+loadbalancer VIP end up at the node with the source IP of the client, not
+an intermediate proxy.
+
+Load balancers in the first category must use an agreed-upon
+protocol between the loadbalancer and backend to communicate the true client IP,
+such as the HTTP [Forwarded](https://tools.ietf.org/html/rfc7239#section-5.2)
+or [X-FORWARDED-FOR](https://en.wikipedia.org/wiki/X-Forwarded-For)
+headers, or the
+[proxy protocol](http://www.haproxy.org/download/1.5/doc/proxy-protocol.txt).
+Load balancers in the second category can leverage the feature described above
+by creating an HTTP health check pointing at the port stored in
+the `service.spec.healthCheckNodePort` field on the Service.
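+
+For reference, the two steps this tutorial performed imperatively (exposing the
+Deployment and patching the traffic policy) can also be declared up front in a
+single Service manifest. This is only a sketch, not the tutorial's canonical
+method; the Service name is illustrative, and the `app=source-ip-app` selector
+relies on the label that `kubectl create deployment` added:
+
+```shell
+# A declarative equivalent of "kubectl expose ... --type=LoadBalancer"
+# followed by patching externalTrafficPolicy.
+kubectl apply -f - <<EOF
+apiVersion: v1
+kind: Service
+metadata:
+  name: loadbalancer
+spec:
+  type: LoadBalancer
+  externalTrafficPolicy: Local   # preserve the client source IP
+  selector:
+    app: source-ip-app
+  ports:
+  - port: 80
+    targetPort: 8080
+EOF
+```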
+
+
+
+## {{% heading "cleanup" %}}
+
+
+Delete the Services:
+
+```shell
+kubectl delete svc -l app=source-ip-app
+```
+
+Delete the Deployment, ReplicaSet and Pod:
+
+```shell
+kubectl delete deployment source-ip-app
+```
+
+
+
+## {{% heading "whatsnext" %}}
+
+* Learn more about [connecting applications via services](/docs/concepts/services-networking/connect-applications-service/)
+* Read how to [Create an External Load Balancer](/docs/tasks/access-application-cluster/create-external-load-balancer/)
+

From f0f09085720839fe5bf273118fa136a0d28684b2 Mon Sep 17 00:00:00 2001
From: TAKAHASHI Shuuji
Date: Thu, 23 Jul 2020 18:00:09 +0900
Subject: [PATCH 2/2] Translate tutorials/services/source-ip into Japanese.

---
 content/ja/docs/tutorials/services/_index.md |   2 +-
 .../ja/docs/tutorials/services/source-ip.md  | 293 ++++++++----------
 2 files changed, 132 insertions(+), 163 deletions(-)

diff --git a/content/ja/docs/tutorials/services/_index.md b/content/ja/docs/tutorials/services/_index.md
index 4356a61fec714..2a2a0eb7b6593 100755
--- a/content/ja/docs/tutorials/services/_index.md
+++ b/content/ja/docs/tutorials/services/_index.md
@@ -1,5 +1,5 @@
 ---
-title: "Services"
+title: "Service"
 weight: 70
 ---
 
diff --git a/content/ja/docs/tutorials/services/source-ip.md b/content/ja/docs/tutorials/services/source-ip.md
index 03a9bb097c658..69c626d53294c 100644
--- a/content/ja/docs/tutorials/services/source-ip.md
+++ b/content/ja/docs/tutorials/services/source-ip.md
@@ -1,86 +1,73 @@
 ---
-title: Using Source IP
+title: 送信元IPを使用する
 content_type: tutorial
 min-kubernetes-server-version: v1.5
 ---
 
 
 
-Applications running in a Kubernetes cluster find and communicate with each
-other, and the outside world, through the Service abstraction. This document
-explains what happens to the source IP of packets sent to different types
-of Services, and how you can toggle this behavior according to your needs.
-
-
+Kubernetesクラスター内で実行されているアプリケーションは、Serviceという抽象化を経由して、他のアプリケーションや外の世界を発見し、それらと通信します。このドキュメントでは、異なる種類のServiceに送られたパケットの送信元IPに何が起こるのか、そして必要に応じてこの振る舞いを切り替える方法について説明します。
 
 ## {{% heading "prerequisites" %}}
 
-### Terminology
+### 用語
 
-This document makes use of the following terms:
+このドキュメントでは、以下の用語を使用します。
 
 {{< comment >}}
 If localizing this section, link to the equivalent Wikipedia pages for
 the target localization.
 {{< /comment >}}
 
-[NAT](https://en.wikipedia.org/wiki/Network_address_translation)
-: network address translation
+[NAT](https://ja.wikipedia.org/wiki/%E3%83%8D%E3%83%83%E3%83%88%E3%83%AF%E3%83%BC%E3%82%AF%E3%82%A2%E3%83%89%E3%83%AC%E3%82%B9%E5%A4%89%E6%8F%9B)
+: ネットワークアドレス変換(network address translation)
 
-[Source NAT](https://en.wikipedia.org/wiki/Network_address_translation#SNAT)
-: replacing the source IP on a packet; in this page, that usually means replacing it with the IP address of a node.
+[送信元NAT](https://en.wikipedia.org/wiki/Network_address_translation#SNAT)
+: パケットの送信元のIPを置換します。このページでは、通常ノードのIPアドレスへの置換を意味します。
 
-[Destination NAT](https://en.wikipedia.org/wiki/Network_address_translation#DNAT)
-: replacing the destination IP on a packet; in this page, that usually means replacing it with the IP address of a {{< glossary_tooltip term_id="pod" >}}.
+[送信先NAT](https://en.wikipedia.org/wiki/Network_address_translation#DNAT)
+: パケットの送信先のIPを置換します。このページでは、通常{{< glossary_tooltip term_id="pod" >}}のIPアドレスへの置換を意味します。
 
-[VIP](/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies)
-: a virtual IP address, such as the one assigned to every {{< glossary_tooltip text="Service" term_id="service" >}} in Kubernetes
+[VIP](/ja/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies)
+: Kubernetes内のすべての{{< glossary_tooltip text="Service" term_id="service" >}}などに割り当てられる仮想IPアドレス(virtual IP address)です。
 
-[kube-proxy](/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies)
-: a network daemon that orchestrates Service VIP management on every node
+[kube-proxy](/ja/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies)
+: すべてのノード上でServiceのVIPを管理するネットワークデーモンです。
 
-### Prerequisites
+### 前提条件
 
 {{< include "task-tutorial-prereqs.md" >}}
 
-The examples use a small nginx webserver that echoes back the source
-IP of requests it receives through an HTTP header. You can create it as follows:
+以下の例では、HTTPヘッダー経由で受け取ったリクエストの送信元IPをエコーバックする、小さなnginxウェブサーバーを使用します。次のコマンドでウェブサーバーを作成できます。
 
 ```shell
 kubectl create deployment source-ip-app --image=k8s.gcr.io/echoserver:1.4
 ```
+
-The output is:
+出力は次のようになります。
+
 ```
 deployment.apps/source-ip-app created
 ```
 
 ## {{% heading "objectives" %}}
 
-* Expose a simple application through various types of Services
-* Understand how each Service type handles source IP NAT
-* Understand the tradeoffs involved in preserving source IP
+* 単純なアプリケーションを様々な種類のService経由で公開する
+* それぞれの種類のServiceがどのように送信元IPのNATを扱うかを理解する
+* 送信元IPを保持することに関わるトレードオフを理解する
 
+## `Type=ClusterIP`を使用したServiceでの送信元IP
 
-## Source IP for Services with `Type=ClusterIP`
-
-Packets sent to ClusterIP from within the cluster are never source NAT'd if
-you're running kube-proxy in
-[iptables mode](/docs/concepts/services-networking/service/#proxy-mode-iptables)
-(the default).
-
-You can also query the kube-proxy mode by fetching
-`http://localhost:10249/proxyMode` on the node where kube-proxy is running.
+kube-proxyが[iptablesモード](/ja/docs/concepts/services-networking/service/#proxy-mode-iptables)(デフォルト)で実行されている場合、クラスター内部からClusterIPに送られたパケットに送信元のNATが行われることは決してありません。kube-proxyが実行されているノード上で`http://localhost:10249/proxyMode`にリクエストを送って、kube-proxyのモードを問い合わせてみましょう。
 
 ```console
 kubectl get nodes
 ```
+
-The output is similar to this:
+出力は次のようになります。
+
 ```
 NAME                   STATUS     ROLES    AGE     VERSION
 kubernetes-node-6jst   Ready      <none>   2h      v1.13.0
@@ -88,49 +75,59 @@ kubernetes-node-cx31   Ready      <none>   2h      v1.13.0
 kubernetes-node-jj1t   Ready      <none>   2h      v1.13.0
 ```
 
-Get the proxy mode on one of the nodes (kube-proxy listens on port 10249):
+これらのノードの1つでproxyモードを取得します(kube-proxyはポート10249をlistenしています)。
+
 ```shell
-# Run this in a shell on the node you want to query.
+# このコマンドは、問い合わせを行いたいノード上のシェルで実行してください。
 curl http://localhost:10249/proxyMode
 ```
+
-The output is:
+出力は次のようになります。
+
 ```
 iptables
 ```
 
-You can test source IP preservation by creating a Service over the source IP app:
+source IPアプリのServiceを作成することで、送信元IPが保持されているかテストできます。
 
 ```shell
 kubectl expose deployment source-ip-app --name=clusterip --port=80 --target-port=8080
 ```
+
-The output is:
+出力は次のようになります。
+
 ```
 service/clusterip exposed
 ```
 ```shell
 kubectl get svc clusterip
 ```
+
-The output is similar to:
+出力は次のようになります。
+
 ```
 NAME        TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
 clusterip   ClusterIP   10.0.170.92   <none>        80/TCP    51s
 ```
 
-You can then hit the `ClusterIP` from a pod in the same cluster:
+そして、同じクラスター上のPodから`ClusterIP`にアクセスします。
 
 ```shell
 kubectl run busybox -it --image=busybox --restart=Never --rm
 ```
+
-The output is similar to this:
+出力は次のようになります。
+
 ```
 Waiting for pod default/busybox to be running, status is Pending, pod ready: false
 If you don't see a command prompt, try pressing enter.
 
 ```
+
-You can then run a command inside that Pod:
+これで、Podの内部でコマンドが実行できます。
 
 ```shell
-# Run this inside the terminal from "kubectl run"
+# このコマンドは、"kubectl run" のターミナルの内部で実行してください
 ip addr
 ```
@@ -148,9 +145,10 @@ ip addr
        valid_lft forever preferred_lft forever
 ```
 
-Then use `wget` to query the webserver through the `clusterip` Service:
+そして、`wget`を使用して`clusterip` Service経由でウェブサーバーに問い合わせます。
+
 ```shell
-# Replace 10.0.170.92 with the cluster IP of the Service named "clusterip"
+# 10.0.170.92 の部分を、clusteripという名前のServiceのIPv4アドレスに置き換えてください
 wget -qO - 10.0.170.92
 ```
 ```
 CLIENT VALUES:
 client_address=10.244.3.8
 command=GET
 ...
 ```
+
-The `client_address` is always the client pod's IP address, whether the client pod and server pod are on the same node or on different nodes.
+`client_address`は常にクライアントのPodのIPアドレスになります。これは、クライアントのPodとサーバーのPodが同じノード上にあっても、異なるノード上にあっても変わりません。
+
+## `Type=NodePort`を使用したServiceでの送信元IP
 
-## Source IP for Services with `Type=NodePort`
-
-Packets sent to Services with
-[`Type=NodePort`](/docs/concepts/services-networking/service/#nodeport)
-are source NAT'd by default. You can test this by creating a `NodePort` Service:
+[`Type=NodePort`](/ja/docs/concepts/services-networking/service/#nodeport)を使用したServiceに送られたパケットは、デフォルトで送信元のNATが行われます。`NodePort` Serviceを作ることでテストできます。
 
 ```shell
 kubectl expose deployment source-ip-app --name=nodeport --port=80 --target-port=8080 --type=NodePort
 ```
+
-The output is:
+出力は次のようになります。
+
 ```
 service/nodeport exposed
 ```
 
 ```shell
 NODEPORT=$(kubectl get -o jsonpath="{.spec.ports[0].nodePort}" services nodeport)
 NODES=$(kubectl get nodes -o jsonpath='{ $.items[*].status.addresses[?(@.type=="ExternalIP")].address }')
 ```
 
-If you're running on a cloud provider, you may need to open up a firewall rule
-for the `nodes:nodeport` reported above.
-Now you can try reaching the Service from outside the cluster through the node
-port allocated above.
+クラウドプロバイダーで実行する場合、上に示した`nodes:nodeport`に対してファイアウォールのルールを作成する必要があるかもしれません。それでは、上で割り当てたノードポート経由で、クラスターの外部からServiceにアクセスしてみましょう。
 
 ```shell
 for node in $NODES; do curl -s $node:$NODEPORT | grep -i client_address; done
 ```
+
-The output is similar to:
+出力は次のようになります。
+
 ```
 client_address=10.180.1.1
 client_address=10.240.0.5
 client_address=10.240.0.3
 ```
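+
+参考までに、この時点での`externalTrafficPolicy`の値は次のように確認できます(簡単な例です。デフォルトは`Cluster`で、これが上で見たSNATの動作に対応します)。
+
+```shell
+# externalTrafficPolicyの現在の値を表示します(デフォルトは"Cluster")
+kubectl get svc nodeport -o jsonpath='{.spec.externalTrafficPolicy}'
+```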
 
-Note that these are not the correct client IPs; they're cluster-internal IPs.
-This is what happens:
+これらは正しいクライアントIPではなく、クラスター内部のIPであることがわかります。ここでは、次のようなことが起こっています。
 
-* client sends packet to `node2:nodePort`
-* `node2` replaces the source IP address (SNAT) in the packet with its own IP address
-* `node2` replaces the destination IP on the packet with the pod IP
-* packet is routed to node 1, and then to the endpoint
-* the pod's reply is routed back to node 2
-* the pod's reply is sent back to the client
+* クライアントがパケットを`node2:nodePort`に送信する
+* `node2`は、パケット内の送信元IPアドレスを自ノードのIPアドレスに置換する(SNAT)
+* `node2`は、パケット内の送信先IPアドレスをPodのIPアドレスに置換する
+* パケットはnode 1にルーティングされ、そこからエンドポイントにルーティングされる
+* Podからの応答がnode 2にルーティングされて戻ってくる
+* Podからの応答がクライアントに送り返される
 
-Visually:
+図で表すと次のようになります。
 
 ```
           client
@@ -218,48 +216,42 @@ Visually:
 endpoint
 ```
 
+クライアントのIPが失われることを回避するために、Kubernetesには[クライアントの送信元IPを保持する](/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip)機能があります。`service.spec.externalTrafficPolicy`の値を`Local`に設定すると、kube-proxyはローカルに存在するエンドポイントに対してのみリクエストをプロキシーし、他のノードへはトラフィックを転送しなくなります。このアプローチでは、オリジナルの送信元IPアドレスが保持されます。ローカルにエンドポイントが存在しない場合には、そのノードに送信されたパケットは損失します。そのため、エンドポイントまで到達したパケットに適用するパケット処理ルールでは、送信元IPが正しいことを信頼できます。
 
-To avoid this, Kubernetes has a feature to
-[preserve the client source IP](/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip).
-If you set `service.spec.externalTrafficPolicy` to the value `Local`,
-kube-proxy only proxies requests to local endpoints, and does not
-forward traffic to other nodes. This approach preserves the original
-source IP address. If there are no local endpoints, packets sent to the
-node are dropped, so you can rely on the correct source IP in any packet
-processing rules you might apply to packets that make it through to the
-endpoint.
-
-Set the `service.spec.externalTrafficPolicy` field as follows:
+次のようにして`service.spec.externalTrafficPolicy`フィールドを設定します。
 
 ```shell
 kubectl patch svc nodeport -p '{"spec":{"externalTrafficPolicy":"Local"}}'
 ```
+
-The output is:
+出力は次のようになります。
+
 ```
 service/nodeport patched
 ```
 
-Now, re-run the test:
+そして、再度テストしてみます。
 
 ```shell
 for node in $NODES; do curl --connect-timeout 1 -s $node:$NODEPORT | grep -i client_address; done
 ```
+
-The output is similar to:
+出力は次のようになります。
+
 ```
 client_address=198.51.100.79
 ```
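+
+また、どのノードでエンドポイントのPodが実行されているかは、Serviceのエンドポイントからも確認できます(簡単な例です)。
+
+```shell
+# Serviceのエンドポイント(PodのIPとポート)を一覧表示します
+kubectl get endpoints nodeport
+```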
 
-Note that you only got one reply, with the *right* client IP, from the one node on which the endpoint pod
-is running.
+今度は、*正しい*クライアントIPが含まれる応答が1つだけ得られました。これは、エンドポイントのPodが実行されているノードから来たものです。
 
-This is what happens:
+ここでは、次のようなことが起こっています。
 
-* client sends packet to `node2:nodePort`, which doesn't have any endpoints
-* packet is dropped
-* client sends packet to `node1:nodePort`, which *does* have endpoints
-* node1 routes packet to endpoint with the correct source IP
+* クライアントがパケットをエンドポイントが存在しない`node2:nodePort`に送信する
+* パケットが損失する
+* クライアントがパケットをエンドポイントが*存在する*`node1:nodePort`に送信する
+* node1は、正しい送信元IPを持つパケットをエンドポイントにルーティングする
 
-Visually:
+図で表すと次のようになります。
 
 ```
         client
@@ -273,56 +265,52 @@ Visually:
 endpoint
 ```
 
+## `Type=LoadBalancer`を使用したServiceでの送信元IP
 
-## Source IP for Services with `Type=LoadBalancer`
-
-Packets sent to Services with
-[`Type=LoadBalancer`](/docs/concepts/services-networking/service/#loadbalancer)
-are source NAT'd by default, because all schedulable Kubernetes nodes in the
-`Ready` state are eligible for load-balanced traffic. So if packets arrive
-at a node without an endpoint, the system proxies them to a node *with* an
-endpoint, replacing the source IP on the packet with the IP of the node (as
-described in the previous section).
-
-You can test this by exposing the source-ip-app through a load balancer:
+[`Type=LoadBalancer`](/ja/docs/concepts/services-networking/service/#loadbalancer)を使用したServiceに送られたパケットは、デフォルトで送信元のNATが行われます。`Ready`状態にあるすべてのスケジュール可能なKubernetesのNodeが、ロードバランサーからのトラフィックを受付可能であるためです。そのため、エンドポイントが存在しないノードにパケットが到達した場合、システムはエンドポイントが*存在する*ノードにパケットをプロキシーします。このとき、(前のセクションで説明したように)パケットの送信元IPがノードのIPに置換されます。
+
+ロードバランサー経由でsource-ip-appを公開することで、これをテストできます。
 
 ```shell
 kubectl expose deployment source-ip-app --name=loadbalancer --port=80 --target-port=8080 --type=LoadBalancer
 ```
+
-The output is:
+出力は次のようになります。
+
 ```
 service/loadbalancer exposed
 ```
 
-Print out the IP addresses of the Service:
+ServiceのIPアドレスを表示します。
+
 ```console
 kubectl get svc loadbalancer
 ```
+
-The output is similar to this:
+出力は次のようになります。
+
 ```
 NAME           TYPE           CLUSTER-IP    EXTERNAL-IP       PORT(S)   AGE
 loadbalancer   LoadBalancer   10.0.65.118   203.0.113.140     80/TCP    5m
 ```
 
-Next, send a request to this Service's external-ip:
+次に、Serviceのexternal-ipにリクエストを送信します。
 
 ```shell
 curl 203.0.113.140
 ```
+
-The output is similar to this:
+出力は次のようになります。
+
 ```
 CLIENT VALUES:
 client_address=10.240.0.5
 ...
 ```
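+
+なお、外部IPは次のような方法でも取得できます(簡単な例です。クラウドプロバイダーによってはIPアドレスの代わりにホスト名が設定されるため、その場合は`.ip`を`.hostname`に読み替えてください)。
+
+```shell
+# ロードバランサーに割り当てられた外部IPを取得してリクエストを送信します
+LB_IP=$(kubectl get svc loadbalancer -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
+curl $LB_IP
+```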
 
-However, if you're running on Google Kubernetes Engine/GCE, setting the same `service.spec.externalTrafficPolicy`
-field to `Local` forces nodes *without* Service endpoints to remove
-themselves from the list of nodes eligible for load-balanced traffic by
-deliberately failing health checks.
+しかし、Google Kubernetes EngineやGCE上で実行している場合、同じ`service.spec.externalTrafficPolicy`フィールドを`Local`に設定すると、ロードバランサーからのトラフィックを受け付け可能なノードのリストから、Serviceエンドポイントが*存在しない*ノードが強制的に削除されます。この動作は、ヘルスチェックを意図的に失敗させることによって実現されています。
 
-Visually:
+図で表すと次のようになります。
 
 ```
                       client
                         |
                       lb VIP
                      / ^
                     v /
-health check --->   node 1   node 2 <--- health check
+ヘルスチェック --->  node 1   node 2 <--- ヘルスチェック
         200  <---   ^ |             ---> 500
                     | V
                  endpoint
 ```
 
-You can test this by patching the `service.spec.externalTrafficPolicy` field:
+`service.spec.externalTrafficPolicy`フィールドを設定することで動作をテストできます。
 
 ```shell
 kubectl patch svc loadbalancer -p '{"spec":{"externalTrafficPolicy":"Local"}}'
 ```
 
-You should immediately see the `service.spec.healthCheckNodePort` field allocated
-by Kubernetes:
+Kubernetesによって`service.spec.healthCheckNodePort`フィールドが割り当てられたことが、すぐに確認できるはずです。
 
 ```shell
 kubectl get svc loadbalancer -o yaml | grep -i healthCheckNodePort
 ```
+
-The output is similar to this:
+出力は次のようになります。
+
 ```yaml
   healthCheckNodePort: 32122
 ```
 
-The `service.spec.healthCheckNodePort` field points to a port on every node
-serving the health check at `/healthz`. You can test this:
+`service.spec.healthCheckNodePort`フィールドは、`/healthz`でヘルスチェックを配信しているすべてのノード上のポートを指しています。次のコマンドでテストできます。
 
 ```shell
 kubectl get pod -o wide -l app=source-ip-app
 ```
+
-The output is similar to this:
+出力は次のようになります。
+
 ```
 NAME                            READY     STATUS    RESTARTS   AGE       IP             NODE
 source-ip-app-826191075-qehz4   1/1       Running   0          20h       10.180.1.136   kubernetes-node-6jst
 ```
 
-Use `curl` to fetch the `/healthz` endpoint on various nodes:
+`curl`を使用して、さまざまなノード上の`/healthz`エンドポイントからデータを取得します。
+
 ```shell
-# Run this locally on a node you choose
+# このコマンドは、選んだノード上でローカルに実行してください
 curl localhost:32122/healthz
 ```
 ```
 1 Service Endpoints found
 ```
 
-On a different node you might get a different result:
+ノードが異なると、得られる結果も異なる可能性があります。
+
 ```shell
-# Run this locally on a node you choose
+# このコマンドは、選んだノード上でローカルに実行してください
 curl localhost:32122/healthz
 ```
 ```
 No Service Endpoints Found
 ```
 
-A controller running on the
-{{< glossary_tooltip text="control plane" term_id="control-plane" >}} is
-responsible for allocating the cloud load balancer. The same controller also
-allocates HTTP health checks pointing to this port/path on each node. Wait
-about 10 seconds for the two nodes without endpoints to fail health checks,
-then use `curl` to query the IPv4 address of the load balancer:
+{{< glossary_tooltip text="コントロールプレーン" term_id="control-plane" >}}上で実行中のコントローラーは、クラウドのロードバランサーを割り当てる責任があります。同じコントローラーは、各ノード上のこのポートとパスを指すHTTPのヘルスチェックも割り当てます。エンドポイントが存在しない2つのノードがヘルスチェックに失敗するまで約10秒待ってから、`curl`を使用してロードバランサーのIPv4アドレスに問い合わせます。
 
 ```shell
 curl 203.0.113.140
 ```
+
-The output is similar to this:
+出力は次のようになります。
+
 ```
 CLIENT VALUES:
 client_address=198.51.100.79
 ...
 ```
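+
+ヘルスチェックの結果が反映されるまでの様子は、たとえば次のように数秒おきに問い合わせることで観察できます(簡単な例です。`client_address`がノードのIPからクライアントのIPに変わるはずです)。
+
+```shell
+# client_addressがノードのIPからクライアントのIPに変わるまで観察します
+for i in $(seq 1 10); do curl -s 203.0.113.140 | grep client_address; sleep 3; done
+```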
 
-## Cross-platform support
-
-Only some cloud providers offer support for source IP preservation through
-Services with `Type=LoadBalancer`.
-The cloud provider you're running on might fulfill the request for a loadbalancer
-in a few different ways:
-
-1. With a proxy that terminates the client connection and opens a new connection
-to your nodes/endpoints. In such cases the source IP will always be that of the
-cloud LB, not that of the client.
-
-2. With a packet forwarder, such that requests from the client sent to the
-loadbalancer VIP end up at the node with the source IP of the client, not
-an intermediate proxy.
+## クロスプラットフォームのサポート
+
+`Type=LoadBalancer`を使用したServiceで送信元IPを保持する機能を提供しているのは、一部のクラウドプロバイダーだけです。実行しているクラウドプロバイダーによっては、以下のように異なる方法でロードバランサーのリクエストを満たす場合があります。
+
+1. クライアントとのコネクションをプロキシーが終端し、ノードやエンドポイントとの接続には新しいコネクションが開かれる。この場合、送信元IPは常にクラウドのロードバランサーのものになり、クライアントのIPにはならない。
+
+2. クライアントからロードバランサーのVIPに送信されたリクエストが、中間のプロキシーではなく、クライアントの送信元IPとともにノードまで到達するようなパケット転送が使用される。
 
-Load balancers in the first category must use an agreed-upon
-protocol between the loadbalancer and backend to communicate the true client IP,
-such as the HTTP [Forwarded](https://tools.ietf.org/html/rfc7239#section-5.2)
-or [X-FORWARDED-FOR](https://en.wikipedia.org/wiki/X-Forwarded-For)
-headers, or the
-[proxy protocol](http://www.haproxy.org/download/1.5/doc/proxy-protocol.txt).
-Load balancers in the second category can leverage the feature described above
-by creating an HTTP health check pointing at the port stored in
-the `service.spec.healthCheckNodePort` field on the Service.
+1つめのカテゴリーのロードバランサーの場合、真のクライアントIPをバックエンドに伝えるために、HTTPの[Forwarded](https://tools.ietf.org/html/rfc7239#section-5.2)ヘッダーや[X-FORWARDED-FOR](https://ja.wikipedia.org/wiki/X-Forwarded-For)ヘッダー、[proxy protocol](http://www.haproxy.org/download/1.5/doc/proxy-protocol.txt)などの、ロードバランサーとバックエンドの間で合意されたプロトコルを使用する必要があります。
+
+2つめのカテゴリーのロードバランサーの場合、Serviceの`service.spec.healthCheckNodePort`フィールドに保存されたポートを指すHTTPのヘルスチェックを作成することで、上で説明した機能を活用できます。
 
 
 
 ## {{% heading "cleanup" %}}
 
-Delete the Services:
+Serviceを削除します。
 
 ```shell
 kubectl delete svc -l app=source-ip-app
 ```
 
-Delete the Deployment, ReplicaSet and Pod:
+Deployment、ReplicaSet、Podを削除します。
 
 ```shell
 kubectl delete deployment source-ip-app
 ```
 
 
 
 ## {{% heading "whatsnext" %}}
 
-* Learn more about [connecting applications via services](/docs/concepts/services-networking/connect-applications-service/)
-* Read how to [Create an External Load Balancer](/docs/tasks/access-application-cluster/create-external-load-balancer/)
+* [Service経由でアプリケーションに接続する](/ja/docs/concepts/services-networking/connect-applications-service/)方法についてさらに学ぶ。
+* [External Load Balancerを作成する](/docs/tasks/access-application-cluster/create-external-load-balancer/)方法について学ぶ。