
Commit

docs: revise tutorial docs and yaml file (#255)
rambohe-ch authored Apr 9, 2021
1 parent 02aa604 commit c7c6023
Showing 15 changed files with 147 additions and 166 deletions.
2 changes: 1 addition & 1 deletion config/setup/yurt-controller-manager.yaml
Original file line number Diff line number Diff line change
@@ -111,7 +111,7 @@ spec:
- weight: 1
preference:
matchExpressions:
- key: alibabacloud.com/is-edge-worker
- key: openyurt.io/is-edge-worker
operator: In
values:
- "false"
43 changes: 5 additions & 38 deletions config/setup/yurt-tunnel-agent.yaml
@@ -1,40 +1,3 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
annotations:
rbac.authorization.kubernetes.io/autoupdate: "true"
name: yurt-tunnel-agent
rules:
- apiGroups:
- ""
resources:
- nodes/stats
- nodes/metrics
- nodes/log
- nodes/spec
- nodes/proxy
verbs:
- create
- get
- list
- watch
- delete
- update
- patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: yurt-tunnel-agent
subjects:
- kind: Group
name: system:nodes
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: ClusterRole
name: yurt-tunnel-agent
apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
@@ -59,7 +22,7 @@ spec:
- yurt-tunnel-agent
args:
- --node-name=$(NODE_NAME)
- --node-ip=$(NODE_IP)
- --v=2
image: openyurt/yurt-tunnel-agent:latest
imagePullPolicy: IfNotPresent
name: yurt-tunnel-agent
@@ -75,6 +38,10 @@ spec:
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: NODE_IP
valueFrom:
fieldRef:
9 changes: 4 additions & 5 deletions config/setup/yurt-tunnel-server.yaml
@@ -69,7 +69,7 @@ metadata:
labels:
name: yurt-tunnel-server
spec:
type: NodePort
type: NodePort
ports:
- port: 10263
targetPort: 10263
@@ -114,9 +114,7 @@ spec:
path: /var/lib/yurttunnel-server
type: DirectoryOrCreate
tolerations:
- key: "node-role.alibabacloud.com/addon"
operator: "Exists"
effect: "NoSchedule"
- operator: "Exists"
nodeSelector:
beta.kubernetes.io/arch: amd64
beta.kubernetes.io/os: linux
@@ -130,11 +128,12 @@ spec:
args:
- --bind-address=$(NODE_IP)
- --proxy-strategy=destHost
- --v=2
env:
- name: NODE_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
fieldPath: status.hostIP
securityContext:
capabilities:
add: ["NET_ADMIN", "NET_RAW"]
4 changes: 2 additions & 2 deletions config/setup/yurthub.yaml
@@ -31,15 +31,15 @@ spec:
- name: pem-dir
mountPath: /var/lib/kubelet/pki
command:
- yurthub
- yurthub
- --v=2
- --server-addr=https://__kubernetes_service_host__:__kubernetes_service_port_https__
- --node-name=$(NODE_NAME)
livenessProbe:
httpGet:
host: 127.0.0.1
path: /v1/healthz
port: 10261
port: 10267
initialDelaySeconds: 300
periodSeconds: 5
failureThreshold: 3
90 changes: 90 additions & 0 deletions config/yaml-template/yurt-controller-manager.yaml
@@ -1,3 +1,91 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: __project_prefix__-controller-manager
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
annotations:
rbac.authorization.kubernetes.io/autoupdate: "true"
name: __project_prefix__-controller-manager
rules:
- apiGroups:
- ""
resources:
- nodes
verbs:
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- ""
resources:
- nodes/status
verbs:
- patch
- update
- apiGroups:
- ""
resources:
- pods/status
verbs:
- update
- apiGroups:
- ""
resources:
- pods
verbs:
- delete
- list
- watch
- apiGroups:
- ""
- events.k8s.io
resources:
- events
verbs:
- create
- patch
- update
- apiGroups:
- coordination.k8s.io
resources:
- leases
verbs:
- create
- delete
- get
- patch
- update
- list
- watch
- apiGroups:
- ""
- apps
resources:
- daemonsets
verbs:
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: __project_prefix__-controller-manager
subjects:
- kind: ServiceAccount
name: __project_prefix__-controller-manager
namespace: kube-system
roleRef:
kind: ClusterRole
name: __project_prefix__-controller-manager
apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: Deployment
metadata:
@@ -13,6 +101,8 @@ spec:
labels:
app: __project_prefix__-controller-manager
spec:
serviceAccountName: __project_prefix__-controller-manager
hostNetwork: true
affinity:
nodeAffinity:
# we prefer allocating ecm on cloud node
42 changes: 5 additions & 37 deletions config/yaml-template/yurt-tunnel-agent.yaml
@@ -1,40 +1,3 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
annotations:
rbac.authorization.kubernetes.io/autoupdate: "true"
name: __project_prefix__-tunnel-agent
rules:
- apiGroups:
- ""
resources:
- nodes/stats
- nodes/metrics
- nodes/log
- nodes/spec
- nodes/proxy
verbs:
- create
- get
- list
- watch
- delete
- update
- patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: __project_prefix__-tunnel-agent
subjects:
- kind: Group
name: system:nodes
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: ClusterRole
name: __project_prefix__-tunnel-agent
apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
@@ -59,6 +22,7 @@ spec:
- __project_prefix__-tunnel-agent
args:
- --node-name=$(NODE_NAME)
- --v=2
image: __repo__/__project_prefix__-tunnel-agent:__tag__
imagePullPolicy: IfNotPresent
name: __project_prefix__-tunnel-agent
@@ -78,6 +42,10 @@ spec:
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: NODE_IP
valueFrom:
fieldRef:
fieldPath: status.hostIP
hostNetwork: true
restartPolicy: Always
volumes:
10 changes: 5 additions & 5 deletions config/yaml-template/yurt-tunnel-server.yaml
Expand Up @@ -69,7 +69,7 @@ metadata:
labels:
name: __project_prefix__-tunnel-server
spec:
type: NodePort
type: NodePort
ports:
- port: 10263
targetPort: 10263
@@ -114,9 +114,7 @@ spec:
path: /var/lib/yurttunnel-server
type: DirectoryOrCreate
tolerations:
- key: "node-role.alibabacloud.com/addon"
operator: "Exists"
effect: "NoSchedule"
- operator: "Exists"
nodeSelector:
beta.kubernetes.io/arch: amd64
beta.kubernetes.io/os: linux
@@ -129,11 +127,13 @@ spec:
- __project_prefix__-tunnel-server
args:
- --bind-address=$(NODE_IP)
- --proxy-strategy=destHost
- --v=2
env:
- name: NODE_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
fieldPath: status.hostIP
securityContext:
capabilities:
add: ["NET_ADMIN", "NET_RAW"]
38 changes: 22 additions & 16 deletions docs/tutorial/manually-setup.md
@@ -8,7 +8,7 @@ at `config/setup/`.

When disconnected from the apiserver, only the pods running on autonomous edge nodes will
be protected from eviction. Therefore, we first need to divide nodes into two categories, the cloud node
and the edge node, by using the label `alibabacloud.com/is-edge-worker`. Assume that the given Kubernetes cluster
and the edge node, by using the label `openyurt.io/is-edge-worker`. Assume that the given Kubernetes cluster
has two nodes,
```bash
$ kubectl get nodes
@@ -20,13 +20,13 @@ and we will use node `us-west-1.192.168.0.87` as the cloud node.

We label the cloud node with value `false`,
```bash
$ kubectl label node us-west-1.192.168.0.87 alibabacloud.com/is-edge-worker=false
$ kubectl label node us-west-1.192.168.0.87 openyurt.io/is-edge-worker=false
node/us-west-1.192.168.0.87 labeled
```

and the edge node with value `true`.
```bash
$ kubectl label node us-west-1.192.168.0.88 alibabacloud.com/is-edge-worker=true
$ kubectl label node us-west-1.192.168.0.88 openyurt.io/is-edge-worker=true
node/us-west-1.192.168.0.88 labeled
```
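The cloud/edge split above is just a partition over the value of a single node label. As a rough illustration (the node-to-label table below is hard-coded to mirror the two-node example; on a real cluster you would read it from `kubectl get nodes --show-labels`), the selection logic looks like:

```shell
# Partition a node->label table on the openyurt.io/is-edge-worker value.
# The two entries mirror the example cluster above (hypothetical data).
nodes="us-west-1.192.168.0.87=false
us-west-1.192.168.0.88=true"

edge_nodes=$(printf '%s\n' "$nodes" | awk -F= '$2 == "true"  {print $1}')
cloud_nodes=$(printf '%s\n' "$nodes" | awk -F= '$2 == "false" {print $1}')

echo "cloud: $cloud_nodes"
echo "edge:  $edge_nodes"
```

Components such as the yurt-controller-manager and yurt-tunnel-server key their scheduling preferences off the `false` value, while edge-side components target `true`.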

@@ -95,22 +95,28 @@ Please refer to this [document](.//yurt-tunnel.md#5-setup-the-yurt-tunnel-manual
By now, we have set up all the required components for the OpenYurt cluster. Next, we only need to reset the
kubelet service so that it accesses the apiserver through the yurthub (the following steps assume that we are logged
in to the edge node as the root user).

To do so, we create a new kubeconfig file for the kubelet service based on the original one (i.e., `/etc/kubernetes/kubelet.conf`).
```bash
$ mkdir -p /var/lib/openyurt && cp /etc/kubernetes/kubelet.conf /var/lib/openyurt
```

As the kubelet will connect to the Yurthub through http, we need to remove unnecessary fields from the newly created kubeconfig file
As the kubelet will connect to the Yurthub through http, we create a new kubeconfig file for the kubelet service.
```bash
sed -i '/certificate-authority-data/d;
/client-key/d;
/client-certificate/d;
/user:/d;
s/ https.*/ http:\/\/127.0.0.1:10261/g' /var/lib/openyurt/kubelet.conf
mkdir -p /var/lib/openyurt
cat << EOF > /var/lib/openyurt/kubelet.conf
apiVersion: v1
clusters:
- cluster:
server: http://127.0.0.1:10261
name: default-cluster
contexts:
- context:
cluster: default-cluster
namespace: default
user: default-auth
name: default-context
current-context: default-context
kind: Config
preferences: {}
EOF
```
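As a quick sanity check (a sketch only; it writes a minimal copy to a temporary path instead of touching `/var/lib/openyurt`), you can confirm that the kubeconfig points the kubelet at the local yurthub endpoint. Since the connection is plain http on 127.0.0.1:10261, the file needs no certificate fields:

```shell
# Write a minimal copy of the kubeconfig to a scratch directory and
# inspect the cluster server entry.
tmpdir=$(mktemp -d)
cat << EOF > "$tmpdir/kubelet.conf"
apiVersion: v1
clusters:
- cluster:
    server: http://127.0.0.1:10261
  name: default-cluster
kind: Config
EOF
grep 'server:' "$tmpdir/kubelet.conf"
```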

In order to let the kubelet use the revised kubeconfig, we edit the drop-in file of the kubelet
In order to let the kubelet use the new kubeconfig, we edit the drop-in file of the kubelet
service (i.e., `/etc/systemd/system/kubelet.service.d/10-kubeadm.conf`)
```bash
sed -i "s|KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=\/etc\/kubernetes\/bootstrap-kubelet.conf\ --kubeconfig=\/etc\/kubernetes\/kubelet.conf|KUBELET_KUBECONFIG_ARGS=--kubeconfig=\/var\/lib\/openyurt\/kubelet.conf|g" \
2 changes: 1 addition & 1 deletion docs/tutorial/yurt-tunnel.md
@@ -110,7 +110,7 @@ Next, we can set up the yurt-tunnel-agent. Like before, we add a label to the
edge node, which allows the yurt-tunnel-agent to be run on the edge node:
```bash
kubectl label nodes minikube-m02 openyurt.io/edge-enable-reverseTunnel-client=true
kubectl label nodes minikube-m02 openyurt.io/is-edge-worker=true
```
And, apply the yurt-tunnel-agent yaml: