Tip
This demo was originally forked from Burr Sutter’s excellent presentation https://github.com/burrsutter/9stepsawesome and refactored for conciseness to fit a lunch & learn demo session. If you have 2-3 hours to spare, please check out the original - Burr covers a lot more topics.
This demo assumes you have set up the following:

- Minikube
- kubens
- Docker
- jq
- git
- JDK
- Apache Maven
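As a quick sanity check, you can verify that the prerequisite tools are all on your PATH (a minimal sketch; adjust to your setup):

# prints the path of each required tool, or an error if one is missing
$ command -v minikube kubens docker jq git java mvn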
$ kubectl get namespaces
NAME          STATUS   AGE
default       Active   29m
kube-public   Active   29m
kube-system   Active   29m
$ kubectl create namespace opsdemo
$ kubectl get namespaces
NAME          STATUS   AGE
default       Active   22h
opsdemo       Active   6m
kube-public   Active   21h
kube-system   Active   22h
$ kubens
default
kube-public
kube-system
opsdemo
$ kubens opsdemo
Context "minikube" modified.
Active namespace is "opsdemo".
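kubens records the active namespace in your kubeconfig context, so you can double-check it without kubens too (a small sketch using plain kubectl):

# show the namespace recorded in the current context
$ kubectl config view --minify --output 'jsonpath={..namespace}'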
# tell your host's docker to use the one inside minikube VM
$ eval $(minikube docker-env)
# build the fat jar
$ cd hello/springboot
$ mvn clean package
# test the fat jar to make sure it works
$ java -jar target/boot-demo-0.0.1.jar
$ curl localhost:8080
# make the docker image, tag it as v1 (prints 'aloha')
$ docker build -t 9stepsawesome/myboot:v1 .
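Optionally, confirm the image is now available inside minikube’s Docker daemon:

$ docker images | grep myboot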
Now that we have a Docker image, we can quickly spin up application instances using Kubernetes.
Review the content of kubefiles/myboot-deployment.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: myboot
  name: myboot
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myboot
  template:
    metadata:
      labels:
        app: myboot
    spec:
      containers:
      - name: myboot
        image: 9stepsawesome/myboot:v1
        ports:
        - containerPort: 8080
Note that it uses the Docker image we built earlier, 9stepsawesome/myboot:v1
# tell kubernetes to create a deployment of app servers
$ kubectl create -f kubefiles/myboot-deployment.yml
Check what happened in the other window
$ watch kubectl get all
NAME                         READY   STATUS    RESTARTS   AGE
pod/myboot-5d966db4f-vhfwq   1/1     Running   0          4s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   16s

NAME                     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/myboot   1         1         1            1           4s

NAME                               DESIRED   CURRENT   READY   AGE
replicaset.apps/myboot-5d966db4f   1         1         1       4s
Create a Service
This lightweight Service acts as an intelligent HAProxy-style load balancer, routing HTTP requests to the application pods
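The Service definition is small. Here is a minimal sketch of what kubefiles/myboot-service.yml likely contains (the exact file in the repo may differ; type LoadBalancer and port 8080 are inferred from the kubectl output below):

apiVersion: v1
kind: Service
metadata:
  labels:
    app: myboot
  name: myboot
spec:
  type: LoadBalancer
  ports:
  - port: 8080
    targetPort: 8080
  selector:
    app: myboot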
$ kubectl create -f kubefiles/myboot-service.yml
observe what happened in the other window
NAME                         READY   STATUS    RESTARTS   AGE
pod/myboot-5d966db4f-vhfwq   1/1     Running   0          3m

NAME                 TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
service/kubernetes   ClusterIP      10.96.0.1      <none>        443/TCP          3m
service/myboot       LoadBalancer   10.101.41.51   <pending>     8080:31416/TCP   20s

NAME                     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/myboot   1         1         1            1           3m

NAME                               DESIRED   CURRENT   READY   AGE
replicaset.apps/myboot-5d966db4f   1         1         1       3m
Get the nodePort
$ kubectl get service/myboot -o jsonpath="{.spec.ports[*].nodePort}"
Curl that URL + nodePort
$ curl $(minikube ip):$(kubectl get service/myboot -o jsonpath="{.spec.ports[*].nodePort}")
Perhaps build a little loop to curl that endpoint
while true
do
  curl $(minikube ip):$(kubectl get service/myboot -o jsonpath="{.spec.ports[*].nodePort}")
  sleep .5
done
Let’s scale the application up to 2 replicas. There are several possible ways to achieve this result.
You can edit the myboot-deployment.yml, updating replicas
# edit the myboot-deployment.yml and set replicas to 2
$ kubectl replace -f kubefiles/myboot-deployment.yml
Or use the kubectl scale command. Now make it 3 replicas and see what happens
$ kubectl scale --replicas=3 deployment/myboot
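One quick way to see what happens is to list the pods by label; the app=myboot label comes from the deployment above:

$ kubectl get pods -l app=myboot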
When your application has issues and instances die sporadically, restarting them manually is a pain. Kubernetes watches the instances and restarts them if any of them dies unexpectedly
# get the NAME of first pod
$ kubectl get pods -o json | jq -r '.items[0].metadata.name'
# kill the first pod and observe
$ kubectl delete pod $(kubectl get pods -o json | jq -r '.items[0].metadata.name')
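In a separate window you can watch the ReplicaSet spin up a replacement while the old pod terminates:

$ kubectl get pods -w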
Update MyRESTController.java
greeting = environment.getProperty("GREETING","Bonjour");
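For context, here is a hypothetical sketch of the surrounding controller code; the field and method names are illustrative and the actual MyRESTController.java in the repo may differ:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.core.env.Environment;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class MyRESTController {

    @Autowired
    private Environment environment;

    private String greeting;
    private int count = 0;

    @RequestMapping("/")
    public String sayHello() {
        // "Bonjour" is the fallback when no GREETING env var is set;
        // in Kubernetes, HOSTNAME resolves to the pod name
        greeting = environment.getProperty("GREETING", "Bonjour");
        String hostname = System.getenv().getOrDefault("HOSTNAME", "unknown");
        return greeting + " from Spring Boot! " + (count++) + " on " + hostname;
    }
}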
Compile & Build the fat jar
# cd into springboot directory
$ mvn clean package
You can test with "java -jar target/boot-demo-0.0.1.jar" and "curl localhost:8080". Ideally, you would also run unit tests with "mvn test".
Build the new docker image with a v2 tag
$ docker build -t 9stepsawesome/myboot:v2 .
$ docker images | grep myboot
Roll out the update
# in a separate window, watch kubectl get all
$ watch kubectl get all
Instruct Kubernetes to switch out the Docker image to v2
$ kubectl set image deployment/myboot myboot=9stepsawesome/myboot:v2
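You can also follow the rollout progress from the command line:

$ kubectl rollout status deployment/myboot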
From the curl polling you’ll see pods drop off and new pods come online
curl: (7) Failed to connect to 192.168.64.10 port 31416: Connection refused
Aloha from Spring Boot! 0 on myboot-5955897c9b-klsvz
curl: (7) Failed to connect to 192.168.64.10 port 31416: Connection refused
Bonjour from Spring Boot! 1 on myboot-5955897c9b-klsvz
Bonjour from Spring Boot! 2 on myboot-5955897c9b-klsvz
Bonjour from Spring Boot! 0 on myboot-5955897c9b-lxz77
Bonjour from Spring Boot! 1 on myboot-5955897c9b-lxz77
Bonjour from Spring Boot! 2 on myboot-5955897c9b-lxz77
Let’s undo the rollout
$ kubectl rollout undo deployment/myboot
observe in the curl window
curl: (7) Failed to connect to 192.168.64.10 port 31416: Connection refused
curl: (7) Failed to connect to 192.168.64.10 port 31416: Connection refused
curl: (7) Failed to connect to 192.168.64.10 port 31416: Connection refused
Aloha from Spring Boot! 0 on myboot-5d966db4f-d784z
Aloha from Spring Boot! 1 on myboot-5d966db4f-d784z
Aloha from Spring Boot! 0 on myboot-5d966db4f-z2b4d
Aloha from Spring Boot! 2 on myboot-5d966db4f-d784z
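If you want to see which revisions are available to roll back to, the Deployment keeps a revision history:

$ kubectl rollout history deployment/myboot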
The trick to enabling zero-downtime deployments is to send traffic to new application instances only when they are up and ready to serve, and to gracefully shut down old instances in a rolling fashion.
In Kubernetes, we can achieve this with just a few lines of YAML config.
To prepare the demo, let’s replace the current deployment with a slightly updated one
$ kubectl replace -f kubefiles/myboot-deployment-resources.yml
Add the liveness and readiness probes to your deployment YAML (the updated file is myboot-deployment-liveready.yml)
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: myboot
  name: myboot
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myboot
  template:
    metadata:
      labels:
        app: myboot
    spec:
      containers:
      - name: myboot
        image: 9stepsawesome/myboot:v1
        ports:
        - containerPort: 8080
          name: http
        envFrom:
        - configMapRef:
            name: my-config
        resources:
          requests:
            memory: "300Mi"
            cpu: "250m" # 1/4 core
          limits:
            memory: "400Mi"
            cpu: "1000m" # 1 core
        livenessProbe:
          httpGet:
            port: http
            path: /
          initialDelaySeconds: 10
          periodSeconds: 5
          timeoutSeconds: 2
        readinessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 10
          periodSeconds: 3
and replace the Deployment in the current environment
Tip
Do a diff of the two files to see the difference.
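For example, assuming both files live under kubefiles/:

$ diff kubefiles/myboot-deployment-resources.yml kubefiles/myboot-deployment-liveready.yml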
$ kubectl replace -f kubefiles/myboot-deployment-liveready.yml
You will still see a brief outage as Kubernetes finishes swapping out the deployment.
Do a describe to see the new probes in place.
$ kubectl describe deployment/myboot
    myboot:
      Image:      9stepsawesome/myboot:v1
      Port:       8080/TCP
      Host Port:  0/TCP
      Limits:
        cpu:     1
        memory:  400Mi
      Requests:
        cpu:     250m
        memory:  300Mi
      Liveness:   http-get http://:http/ delay=10s timeout=2s period=5s #success=1 #failure=3
      Readiness:  http-get http://:8080/health delay=10s timeout=1s period=3s #success=1 #failure=3
      Environment Variables from:
        my-config  ConfigMap  Optional: false
Instruct Kubernetes to scale to 3 replicas
$ kubectl scale deployment/myboot --replicas=3
Now roll out the update using version 2
$ kubectl set image deployment/myboot myboot=9stepsawesome/myboot:v2
and this time there will be no errors
Aloha from Spring Boot! 115 on myboot-859cbbfb98-lnc8q
Aloha from Spring Boot! 116 on myboot-859cbbfb98-lnc8q
Aloha from Spring Boot! 117 on myboot-859cbbfb98-lnc8q
Bonjour from Spring Boot! 0 on myboot-5b686c586f-ccv5r
Bonjour from Spring Boot! 1 on myboot-5b686c586f-ccv5r
Rolling back is just as clean
$ kubectl rollout undo deployment/myboot
Bonjour from Spring Boot! 30 on myboot-5b686c586f-ccv5r
Bonjour from Spring Boot! 31 on myboot-5b686c586f-ccv5r
Bonjour from Spring Boot! 32 on myboot-5b686c586f-ccv5r
Aloha from Spring Boot! 0 on myboot-859cbbfb98-4rvl8
Aloha from Spring Boot! 1 on myboot-859cbbfb98-4rvl8
Delete the demo namespace; this removes all the running containers and services
$ kubectl delete namespaces opsdemo
namespace "demo" deleted
Shut down minikube if you’d like
$ minikube stop
If minikube just sits at "Starting VM…", it probably did not stop properly. Crank up the verbosity to 3 and see where it actually gets stuck
$ minikube start -v3
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
(minikube) Using UUID 20641208-fc0a-11e8-b0b9-f40f2435428b
(minikube) Generated MAC 36:de:a0:aa:93:75
(minikube) Starting with cmdline: loglevel=3 user=docker console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes base host=minikube
Waiting for SSH to be available...
If this matches the symptom, delete the VM driver’s PID file (applicable to Windows and Mac; the example below is for Mac using HyperKit)
$ rm ~/.minikube/machines/minikube/hyperkit.pid
If the step above does not solve the issue, go for the nuclear option
$ minikube delete
Deleting local Kubernetes cluster...
Machine deleted.