This tutorial demonstrates how to use yurtctl to install/uninstall OpenYurt.
Please refer to the Getting Started section on the README page to prepare and build the binary to _output/bin/yurtctl.
We assume a minikube cluster (version 1.14 or earlier) is installed.
Let us use yurtctl to convert a standard Kubernetes cluster to an OpenYurt cluster.
- Run the following command:
$ _output/bin/yurtctl convert --provider minikube
yurtctl will install all required components and reset the kubelet on the edge node. The output looks like:
convert.go:148] mark minikube as the edge-node
convert.go:178] deploy the yurt controller manager
convert.go:190] deploying the yurt-hub and resetting the kubelet service...
util.go:137] servant job(node-servant-convert-minikube) has succeeded
- The yurt controller manager and yurthub pods will be up and running within one minute. Let us verify them:
$ kubectl get deploy yurt-controller-manager -n kube-system
NAME READY UP-TO-DATE AVAILABLE AGE
yurt-ctrl-mgr 1/1 1 1 23h
$ kubectl get po yurt-hub-minikube -n kube-system
NAME READY STATUS RESTARTS AGE
yurt-hub-minikube 1/1 Running 0 23h
By default, yurtctl will mark all edge nodes as autonomous (only pods running on autonomous edge nodes are protected from eviction during disconnection). As the minikube cluster contains only one node, that node will be marked as an autonomous edge node. Let us verify this by inspecting the node's labels and annotations:
$ kubectl describe node | grep Labels -A 5
Labels: openyurt.io/is-edge-worker=true
$ kubectl describe node | grep Annotations -A 5
Annotations: node.beta.openyurt.io/autonomy: true
By now, the OpenYurt cluster is ready. Users will not notice any difference compared to native Kubernetes when operating the cluster. If you log in to the node, you will find that the local cache has been populated:
$ minikube ssh
$ ls /etc/kubernetes/cache/kubelet/
configmaps events leases nodes pods secrets services
To test whether edge node autonomy works as expected, we will simulate a node "offline" scenario. First, create a test pod that tolerates the not-ready and unreachable taints for only 5 seconds, so that without autonomy it would be evicted quickly once the node goes offline:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: bbox
spec:
  nodeSelector:
    openyurt.io/is-edge-worker: "true"
  tolerations:
  - key: "node.kubernetes.io/unreachable"
    operator: "Exists"
    effect: "NoExecute"
    tolerationSeconds: 5
  - key: "node.kubernetes.io/not-ready"
    operator: "Exists"
    effect: "NoExecute"
    tolerationSeconds: 5
  containers:
  - image: busybox
    command:
    - top
    name: bbox
EOF
- Make the edge node "offline" by changing yurthub's server-addr to an unreachable address:
$ minikube ssh
$ sudo sed -i 's|--server-addr=.*|--server-addr=https://1.1.1.1:1111|' /etc/kubernetes/manifests/yurt-hub.yaml
- Now yurthub is disconnected from the apiserver and works in offline mode. To verify this, we can do the following:
$ minikube ssh
$ curl -s http://127.0.0.1:10261
{
"kind": "Status",
"metadata": {
},
"status": "Failure",
"message": "request( get : /) is not supported when cluster is unhealthy",
"reason": "BadRequest",
"code": 400
}
- After 40 seconds, the edge node status becomes NotReady, but despite the 5-second tolerations, the pod bbox is not evicted and keeps running on the node:
$ kubectl get node && kubectl get po
NAME STATUS ROLES AGE VERSION
minikube NotReady master 58m v1.18.2
NAME READY STATUS RESTARTS AGE
bbox 1/1 Running 0 19m
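Once the offline test is done, the node can be brought back by reversing the sed above. The sketch below demonstrates the substitution on a scratch copy of the manifest line so it can be run anywhere; on the real node you would run the same sed with sudo against /etc/kubernetes/manifests/yurt-hub.yaml, and the apiserver address https://172.17.0.3:8443 here is only an illustrative assumption, not your cluster's actual address.

```shell
# Sketch: reverse the offline simulation. Demonstrated on a scratch copy;
# on the node, run the sed (with sudo) against
# /etc/kubernetes/manifests/yurt-hub.yaml instead. The apiserver address
# below is an assumption -- substitute your cluster's real address.
manifest=$(mktemp)
printf '    - --server-addr=https://1.1.1.1:1111\n' > "$manifest"
sed -i 's|--server-addr=.*|--server-addr=https://172.17.0.3:8443|' "$manifest"
grep -- '--server-addr' "$manifest"
```

After yurthub reconnects to the apiserver, the node should return to Ready shortly afterwards.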
An OpenYurt cluster may consist of some edge nodes and some nodes in the cloud site. yurtctl allows users to specify a list of cloud nodes that won't be converted.
- Start with a two-node ACK cluster:
$ kubectl get node
NAME STATUS ROLES AGE VERSION
us-west-1.192.168.0.87 Ready <none> 19h v1.14.8-aliyun.1
us-west-1.192.168.0.88 Ready <none> 19h v1.14.8-aliyun.1
- You can keep one node (i.e., us-west-1.192.168.0.87) as a non-edge cloud node by using this command:
$ _output/bin/yurtctl convert --provider ack --cloud-nodes us-west-1.192.168.0.87
I0529 11:21:05.835781 9231 convert.go:145] mark us-west-1.192.168.0.87 as the cloud-node
I0529 11:21:05.861064 9231 convert.go:153] mark us-west-1.192.168.0.88 as the edge-node
I0529 11:21:05.951483 9231 convert.go:183] deploy the yurt controller manager
I0529 11:21:05.974443 9231 convert.go:195] deploying the yurt-hub and resetting the kubelet service...
I0529 11:21:26.075075 9231 util.go:147] servant job(node-servant-convert-us-west-1.192.168.0.88) has succeeded
Note: use "," to specify multiple cloud nodes (e.g., --cloud-nodes us-west-1.192.168.0.87,us-west-1.192.168.0.88).
- Node us-west-1.192.168.0.87 will be marked as a non-edge node. You can verify this by inspecting its labels:
$ kubectl describe node us-west-1.192.168.0.87 | grep Labels
Labels: openyurt.io/is-edge-worker=false
- When the OpenYurt cluster contains cloud nodes, the yurt controller manager will be deployed on a cloud node (in this case, the node us-west-1.192.168.0.87):
$ kubectl get pods -A -o=custom-columns='NAME:.metadata.name,NODE:.spec.nodeName'
NAME NODE
yurt-controller-manager-6947f6f748-lxfdx us-west-1.192.168.0.87
Since version v0.2.0, users can set up the yurttunnel using yurtctl convert.
Assume that the original cluster is a two-node minikube cluster:
$ kubectl get node -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
minikube Ready master 72m v1.18.3 172.17.0.3 <none> Ubuntu 20.04 LTS 4.19.76-linuxkit docker://19.3.8
minikube-m02 Ready <none> 71m v1.18.3 172.17.0.4 <none> Ubuntu 20.04 LTS 4.19.76-linuxkit docker://19.3.8
Then, by simply running the yurtctl convert command with the option --deploy-yurttunnel enabled, yurttunnel servers will be deployed on cloud nodes, and a yurttunnel agent will be deployed on every edge node.
$ yurtctl convert --deploy-yurttunnel --cloud-nodes minikube --provider minikube
I0831 12:35:51.719391 77322 convert.go:214] mark minikube as the cloud-node
I0831 12:35:51.728246 77322 convert.go:222] mark minikube-m02 as the edge-node
I0831 12:35:51.753830 77322 convert.go:251] the yurt-controller-manager is deployed
I0831 12:35:51.910440 77322 convert.go:270] yurt-tunnel-server is deployed
I0831 12:35:51.999384 77322 convert.go:278] yurt-tunnel-agent is deployed
I0831 12:35:51.999409 77322 convert.go:282] deploying the yurt-hub and resetting the kubelet service...
I0831 12:36:22.109338 77322 util.go:173] servant job(node-servant-convert-minikube-m02) has succeeded
I0831 12:36:22.109368 77322 convert.go:292] the yurt-hub is deployed
To verify that the yurttunnel works as expected, please refer to the yurttunnel tutorial.
Using yurtctl to revert an OpenYurt cluster can be done as follows:
$ _output/bin/yurtctl revert
revert.go:100] label openyurt.io/is-edge-worker is removed
revert.go:110] yurt controller manager is removed
revert.go:124] ServiceAccount node-controller is created
util.go:137] servant job(node-servant-revert-minikube-m02) has succeeded
revert.go:133] yurt-hub is removed, kubelet service is reset
Note that before performing the revert, please make sure all edge nodes are reachable from the apiserver.
In addition, the path of the kubelet service configuration can be set with the option --kubeadm-conf-path, and the path of the directory on the edge node containing static pod files can be set with the option --pod-manifest-path.
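For example, a revert invocation combining both path options might look like the following (the paths shown are illustrative defaults; adjust them to your node's actual layout):

```shell
$ _output/bin/yurtctl revert \
    --kubeadm-conf-path /etc/systemd/system/kubelet.service.d/10-kubeadm.conf \
    --pod-manifest-path /etc/kubernetes/manifests
```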
yurtctl init will create an OpenYurt cluster, but the user needs to install the container runtime in advance and ensure that the swap partition of the node has been disabled.
Using yurtctl to create an OpenYurt cluster can be done as follows:
$ _output/bin/yurtctl init --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers --kubernetes-version=v1.18.8 --pod-network-cidr=10.244.0.0/16
In addition, the OpenYurt components version can be set with the option --yurt-version, and the OpenYurt components image registry can be set with the option --yurt-image-registry.
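For example, an init invocation pinning both might look like the following (the version tag v0.4.0 is an illustrative assumption; pick the OpenYurt release and registry mirror you actually use):

```shell
$ _output/bin/yurtctl init \
    --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers \
    --kubernetes-version=v1.18.8 \
    --pod-network-cidr=10.244.0.0/16 \
    --yurt-version=v0.4.0 \
    --yurt-image-registry=registry.cn-hangzhou.aliyuncs.com/openyurt
```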
yurtctl join will automatically install the corresponding kubelet according to the cluster version, but the user needs to install the container runtime in advance and ensure that the swap partition of the node has been disabled.
Using yurtctl to join an edge node to an OpenYurt cluster can be done as follows:
$ _output/bin/yurtctl join 1.2.3.4:6443 --token=zffaj3.a5vjzf09qn9ft3gt --node-type=edge-node --discovery-token-unsafe-skip-ca-verification --v=5
Using yurtctl to join a cloud node to an OpenYurt cluster can be done as follows:
$ _output/bin/yurtctl join 1.2.3.4:6443 --token=zffaj3.a5vjzf09qn9ft3gt --node-type=cloud-node --discovery-token-unsafe-skip-ca-verification --v=5
Using yurtctl to revert any changes made to this host by yurtctl join can be done as follows:
$ _output/bin/yurtctl reset
yurtctl convert will turn off the default nodelifecycle controller to allow the yurt-controller-manager to work properly.
If kube-controller-manager is deployed as a static pod, yurtctl can modify kube-controller-manager.yaml according to the parameter --pod-manifest-path, whose default value is /etc/kubernetes/manifests. This also works in kube-controller-manager high-availability scenarios.
For a kube-controller-manager deployed in other ways, however, the user needs to turn off the default nodelifecycle controller manually; please refer to the Disable the default nodelifecycle controller section. In addition, when using yurtctl revert, if kube-controller-manager is not deployed through a static file, the user also needs to restore it manually.
Sometimes the configuration of the node may be different. Users can set the path of the kubelet service configuration with the option --kubeadm-conf-path, which is used by the kubelet component to join the cluster on the edge node.
$ _output/bin/yurtctl convert --kubeadm-conf-path /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
The path of the directory on the edge node containing static pod files can be set with the option --pod-manifest-path.
$ _output/bin/yurtctl convert --pod-manifest-path /etc/kubernetes/manifests
The default timeout for cluster conversion is 2 minutes. Sometimes pulling the related images might take longer than that. To avoid conversion failures due to image-pull timeouts, you can:
- use mirrored images from the Aliyun container registry (ACR):
$ _output/bin/yurtctl convert --provider minikube \
--yurt-controller-manager-image registry.cn-hangzhou.aliyuncs.com/openyurt/yurt-controller-manager:latest \
--yurt-tunnel-agent-image registry.cn-hangzhou.aliyuncs.com/openyurt/yurt-tunnel-agent:latest \
--yurt-tunnel-server-image registry.cn-hangzhou.aliyuncs.com/openyurt/yurt-tunnel-server:latest \
--node-servant-image registry.cn-hangzhou.aliyuncs.com/openyurt/node-servant:latest \
--yurthub-image registry.cn-hangzhou.aliyuncs.com/openyurt/yurthub:latest
- or pull all images on the nodes manually, or use automation tools such as broadcastjob (from Kruise), in advance.
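A sketch of the manual route: the image list below mirrors the ACR mirror example above; run this on each node to be converted, substituting your runtime's pull command if you are not using docker. The loop only prints the pull commands so they can be reviewed first; drop the echo to actually pull.

```shell
# Pre-pull the OpenYurt images before conversion (list mirrors the ACR
# example above). Only prints the commands for review -- remove "echo"
# to execute the pulls.
images="registry.cn-hangzhou.aliyuncs.com/openyurt/yurt-controller-manager:latest
registry.cn-hangzhou.aliyuncs.com/openyurt/yurt-tunnel-agent:latest
registry.cn-hangzhou.aliyuncs.com/openyurt/yurt-tunnel-server:latest
registry.cn-hangzhou.aliyuncs.com/openyurt/node-servant:latest
registry.cn-hangzhou.aliyuncs.com/openyurt/yurthub:latest"
printf '%s\n' "$images" | while read -r img; do
    echo docker pull "$img"
done
```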
In case an ad hoc failure leaves the kubelet unable to communicate with the apiserver, one can recover the original kubelet setup by running the following command on the edge node directly:
$ sudo sed -i "s|--kubeconfig=.*kubelet.conf|--kubeconfig=/etc/kubernetes/kubelet.conf|g;" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf && sudo systemctl daemon-reload && sudo systemctl restart kubelet.service
The pod manifest path is the directory where Kubernetes static pod YAML files are located. Tools like kubeadm, minikube, and kind all set that path to /etc/kubernetes/manifests, and we follow that setting.
So if you have manually changed that setting, conversion will fail. To fix it, we recommend creating a soft link:
$ ln -s $yourSettingPath /etc/kubernetes/manifests