By default, the components deployed on the service mesh are not exposed outside the cluster. To make Bookinfo accessible from outside the cluster, you have to create an Istio Gateway
for the Bookinfo application and also define an Istio VirtualService
with the routes we need.
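As a sketch, the Gateway and VirtualService for Bookinfo look roughly like the following. This mirrors the bookinfo-gateway.yaml sample shipped with Istio; the names, hosts, and matched paths here are illustrative, not a replacement for the shipped sample:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway   # bind to Istio's default ingress gateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bookinfo
spec:
  hosts:
  - "*"
  gateways:
  - bookinfo-gateway
  http:
  - match:
    - uri:
        exact: /productpage
    - uri:
        prefix: /static
    route:
    - destination:
        host: productpage
        port:
          number: 9080
```

The Gateway opens port 80 on the ingress gateway, and the VirtualService binds to that Gateway and routes matching paths to the productpage service.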
The ingress gateway gets exposed as a normal Kubernetes service of type LoadBalancer (or NodePort):
kubectl get svc istio-ingressgateway -n istio-system -o yaml
Because the Istio Ingress Gateway is an Envoy proxy, you can inspect it using its admin endpoints. First, find the name of the istio-ingressgateway pod:
kubectl get pods -n istio-system
Copy and paste your ingress gateway's pod name. Execute:
kubectl -n istio-system exec -it <istio-ingressgateway-...> -- bash
From inside the container, you can view the statistics, listeners, routes, clusters, and server info for the Envoy proxy by querying the admin port (15000):
curl localhost:15000/help
curl localhost:15000/stats
curl localhost:15000/listeners
curl localhost:15000/clusters
curl localhost:15000/server_info
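If you prefer not to exec into the pod, the same admin endpoints can be reached from your host by port-forwarding Envoy's admin port. This is a sketch assuming the default deployment name and admin port 15000:

```shell
# Forward the Envoy admin port of the ingress gateway to localhost
kubectl -n istio-system port-forward deploy/istio-ingressgateway 15000:15000 &

# Query the admin endpoints from the host
curl localhost:15000/stats
curl localhost:15000/clusters
```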
See the admin docs for more details.
It can also be helpful to look at the logs of the Istio ingress gateway to see which requests are being routed.
Before we check the logs, let us exit the container and get back onto the host:
exit
Now let us find the ingress gateway pod and output its log:
kubectl logs istio-ingressgateway-... -n istio-system
Check the created Istio Gateway and Istio VirtualService to see the changes deployed:
kubectl get gateway
kubectl get gateway -o yaml
kubectl get virtualservices
kubectl get virtualservices -o yaml
kubectl get service istio-ingressgateway -n istio-system -o wide
To get just the nodePort of the HTTP port (the second entry in the service's port list) of the istio-ingressgateway service, we can run this:
kubectl get service istio-ingressgateway -n istio-system --template='{{(index .spec.ports 1).nodePort}}'
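Equivalently, you can capture the host and port in environment variables and build the URL from them. This follows the pattern used in Istio's documentation; adjust the port index to match your service's port order:

```shell
# NodePort of the HTTP entry (second item in the port list)
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway \
  -o jsonpath='{.spec.ports[1].nodePort}')

# Internal IP of the first node
export INGRESS_HOST=$(kubectl get nodes \
  -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')

export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT
echo "http://$GATEWAY_URL/productpage"
```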
Modify your local /etc/hosts
file to add an entry for your sample application:
127.0.0.1 bookinfo.meshery.io
The HTTP port is usually 31380.
Or run these commands to retrieve the full URL:
echo "http://$(kubectl get nodes --selector=kubernetes.io/role!=master -o jsonpath={.items[0].status.addresses[?\(@.type==\"InternalIP\"\)].address}):$(kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.spec.ports[1].nodePort}')/productpage"
Before we start playing with Istio's traffic management capabilities, we need to define the available versions of the deployed services. These versions are called subsets and are specified in destination rules.
Using Meshery, navigate to the Custom YAML page and apply the manifest below to create the subsets for Bookinfo:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: productpage
spec:
  host: productpage
  subsets:
  - name: v1
    labels:
      version: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
  - name: v3
    labels:
      version: v3
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: ratings
spec:
  host: ratings
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
  - name: v2-mysql
    labels:
      version: v2-mysql
  - name: v2-mysql-vm
    labels:
      version: v2-mysql-vm
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: details
spec:
  host: details
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
This creates destination rules for each of the Bookinfo services and defines their version subsets.
Manual steps can be found here.
In a few seconds, we should be able to verify the destination rules that were created by using the commands below:
kubectl get destinationrules
kubectl get destinationrules -o yaml
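To see how these subsets will be used later, a VirtualService can reference them in its route destinations. A minimal, hypothetical sketch (not one of this lab's steps) that would pin all reviews traffic to v1:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1   # subset defined in the reviews DestinationRule above
```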
Browse to the Bookinfo website. To view the product page, you will need DNS configured to resolve
bookinfo.meshery.io
to localhost (the /etc/hosts entry above). Now visit bookinfo.meshery.io/productpage
to view the page.
Now, reload the page multiple times and notice how it round-robins between v1, v2, and v3 of the reviews service.
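Instead of reloading in the browser, you can observe the rotation from the command line. This is a sketch assuming the /etc/hosts entry and the usual NodePort from above; adjust the host and port as needed:

```shell
# v1 renders no stars; v2 and v3 render glyphicon-star elements,
# so the count below should change as requests rotate between versions
for i in $(seq 1 6); do
  curl -s http://bookinfo.meshery.io:31380/productpage | grep -c glyphicon-star
done
```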
A detailed description of how this works can be found here.
To better understand the Istio sidecar proxy, let's inspect its details. We will exec
into the productpage pod to find the proxy details. To do so, we need to first find the full pod name and then exec
into the istio-proxy container:
kubectl get pods
kubectl exec -it productpage-v1-... -c istio-proxy -- sh
Once in the container, look at some of the Envoy proxy details by inspecting its config file:
ps aux
ls -l /etc/istio/proxy
cat /etc/istio/proxy/envoy-rev0.json
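The live Envoy configuration (as opposed to the bootstrap file above) can also be dumped from the sidecar's admin port while still inside the container:

```shell
# Full running configuration of the sidecar Envoy
curl -s localhost:15000/config_dump

# Just the server version and state
curl -s localhost:15000/server_info
```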
For more details on the Envoy proxy, please check out its admin docs.
As a last step, let's exit the container:
exit
Run the following command to create default destination rules for the Bookinfo services:
kubectl apply -f samples/bookinfo/networking/destination-rule-all-mtls.yaml
Continue to Lab 4: Observability
Alternative, manual installation steps are provided for reference below. There is no need to execute these if you have performed the steps above.
Run the following command to create default destination rules for the Bookinfo services:
kubectl apply -f samples/bookinfo/networking/destination-rule-all-mtls.yaml
We can create a VirtualService and Gateway for the Bookinfo app on the ingress gateway by running the following:
kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml