diff --git a/docs/advanced/image-prepull.md b/docs/advanced/image-prepull.md
new file mode 100644
index 0000000000..503ffdc749
--- /dev/null
+++ b/docs/advanced/image-prepull.md
@@ -0,0 +1,149 @@
+---
+title: KubeEdge Image PrePull Feature Guide Document
+sidebar_position: 6
+---
+
+
+# KubeEdge Image PrePull Feature Guide Document
+
+KubeEdge version 1.16 introduces a new feature called Image Pre-Pull, which allows users to load images ahead of time on edge nodes through the Kubernetes API of ImagePrePullJob. This feature supports pre-pulling multiple images in batches across multiple edge nodes or node groups, helping to reduce the failure rates and inefficiencies associated with loading images during application deployment or updates, especially in large-scale scenarios.
+
+API example for ImagePrePullJob:
+
+```
+apiVersion: operations.kubeedge.io/v1alpha1
+kind: ImagePrePullJob
+metadata:
+  name: imageprepull-example
+  labels:
+    description: ImagePrePullLabel
+spec:
+  imagePrePullTemplate:
+    images:
+      - image1
+      - image2
+    nodes:
+      - edgenode1
+      - edgenode2
+    checkItems:
+      - "disk"
+    failureTolerate: "0.3"
+    concurrency: 2
+    timeoutSeconds: 180
+    retryTimes: 1
+
+```
+
+
+## 1. Preparation
+
+**Example: Nginx Demo**
+
+Nginx is a lightweight image, so users can run this demo without any prerequisite environment. The Nginx image will be uploaded to a private image repository in advance. Users can then call the pre-pull API from the cloud to pre-pull the Nginx image from the private image repository to the edge nodes.
+
+**1) This example requires KubeEdge v1.16.0 or above and Kubernetes v1.27.0 or above. The versions selected here are KubeEdge v1.16.0 and Kubernetes v1.27.3.**
+
+```
+[root@ke-cloud ~]# kubectl get node
+NAME             STATUS   ROLES                  AGE   VERSION
+cloud.kubeedge   Ready    control-plane,master   3d    v1.27.3
+edge.kubeedge    Ready    agent,edge             2d    v1.27.7-kubeedge-v1.16.0
+
+Note: The following operations will use the edge node edge.kubeedge. If you refer to this document for related operations, the configuration of the edge node name in subsequent steps needs to be changed according to your actual situation.
+```
+
+**2) Ensure that CloudCore has the following configuration enabled:**
+
+
+```
+  taskManager:
+    enable: true // Change from false to true
+```
+This change can be made by running `kubectl edit configmap cloudcore -n kubeedge` and then restarting the CloudCore component.
+
+
+
+
+## 2. Prepare the Secret for the private image (optional)
+For demonstration purposes, a private image repository on Alibaba Cloud is used here: registry.cn-hangzhou.aliyuncs.com, with the demo namespace jilimoxing. Modify these values according to your actual situation.
+
+**1) Push nginx to the private image repository**
+
+```
+[root@cloud ~]# docker tag nginx registry.cn-hangzhou.aliyuncs.com/jilimoxing/test:nginx
+[root@cloud crds~]# docker push registry.cn-hangzhou.aliyuncs.com/jilimoxing/test:nginx
+```
+
+**2) Create a Secret on the cloud**
+Secret is not a required field in ImagePrePullJob. If you need to pre-pull a private image, you can generate a Secret for it.
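+
+If you have already run `docker login` against the registry, the Secret can be created directly from the existing credentials file. A minimal sketch, assuming the default Docker credentials path `$HOME/.docker/config.json` (adjust the path and Secret name to your environment):
+
+```shell
+# Create a dockerconfigjson Secret from the credentials written by `docker login`
+kubectl create secret generic my-secret \
+  --from-file=.dockerconfigjson=$HOME/.docker/config.json \
+  --type=kubernetes.io/dockerconfigjson
+```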
+You can also use kubectl to create a Secret for accessing a container registry, such as when you don't have a Docker configuration file:
+
+```
+[root@cloud ~]# kubectl create secret docker-registry my-secret \
+  --docker-server=my-registry.example:5000 \
+  --docker-username=tiger \
+  --docker-password=pass1234 \
+  --docker-email=tiger@acme.example
+
+[root@cloud ~]# kubectl get secret -A
+NAMESPACE     NAME        TYPE                             DATA   AGE
+default       my-secret   kubernetes.io/dockerconfigjson   1      31s
+
+```
+
+## 3. Create the YAML File
+
+**1) Modify the configuration**
+
+Create a YAML file on the cloud node, and modify the images and imageSecrets information so that they are consistent with the images to be pre-pulled and the image repository Secret, as follows:
+```
+
+[root@ke-cloud ~]# vim imageprepull.yaml
+
+apiVersion: operations.kubeedge.io/v1alpha1
+kind: ImagePrePullJob
+metadata:
+  name: imageprepull-example
+spec:
+  imagePrePullTemplate:
+    concurrency: 1
+    failureTolerate: '0.1'
+    images:
+      - registry.cn-hangzhou.aliyuncs.com/jilimoxing/test:nginx
+    nodeNames:
+      - edge.kubeedge
+    imageSecrets: default/my-secret
+    retryTimes: 1
+    timeoutSeconds: 120
+
+```
+
+**2) Apply the file**
+
+
+```
+[root@ke-cloud ~]# kubectl apply -f imageprepull.yaml
+```
+
+
+**3) Get the ImagePrePullJob status**
+
+Use `kubectl get imageprepulljobs.operations.kubeedge.io imageprepull-example -o jsonpath='{.status}'`:
+
+```
+[root@ke-cloud ~]# kubectl get imageprepulljobs.operations.kubeedge.io imageprepull-example -o jsonpath='{.status}'
+[root@ke-cloud ~]# {"action":"Success","event":"Pull","state":"Successful","status":[{"imageStatus":[{"image":"registry.cn-hangzhou.aliyuncs.com/jilimoxing/test:nginx","state":"Successful"}],"nodeStatus":{"action":"Success","event":"Pull","nodeName":"edge.kubeedge","state":"Successful","time":"2024-04-26T18:51:41Z"}}],"time":"2024-04-26T18:51:41Z"}
+```
+
+## 4. Check whether the edge node image has been pre-pulled successfully
+
+On the edge node, use the command `ctr -n k8s.io i ls` to check.
+```
+[root@edge ~]# ctr -n k8s.io i ls
+```
+The corresponding image has been pre-pulled successfully.
+```
+REF                                                     TYPE                                                  DIGEST                                                                  SIZE     PLATFORMS   LABELS
+registry.cn-hangzhou.aliyuncs.com/jilimoxing/test:nginx application/vnd.docker.distribution.manifest.v2+json sha256:73e957703f1266530db0aeac1fd6a3f87c1e59943f4c13eb340bb8521c6041d7 67.3 MiB linux/amd64
+```
+
diff --git a/docs/setup/install-with-keadm.md b/docs/setup/install-with-keadm.md
index f69e90e316..3967edbc81 100644
--- a/docs/setup/install-with-keadm.md
+++ b/docs/setup/install-with-keadm.md
@@ -2,51 +2,54 @@
 title: Installing KubeEdge with Keadm
 sidebar_position: 3
 ---
-Keadm is used to install the cloud and edge components of KubeEdge. It is not responsible for installing K8s and runtime.
-Please refer [kubernetes-compatibility](https://github.com/kubeedge/kubeedge#kubernetes-compatibility) to get **Kubernetes compatibility** and determine what version of Kubernetes would be installed.
+Keadm is used to install the cloud and edge components of KubeEdge. It does not handle the installation of Kubernetes and its runtime environment.
 
-## Limitation
+Please refer to the [Kubernetes compatibility](https://github.com/kubeedge/kubeedge#kubernetes-compatibility) documentation to check **Kubernetes compatibility** and determine the Kubernetes version to install.
 
-- Need super user rights (or root rights) to run.
+## Limitation
 
+- It requires super user rights (or root rights) to run.
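+
+A minimal pre-flight sketch for this limitation, assuming a POSIX shell (`sudo -i` is one common way to obtain a root shell):
+
+```shell
+# keadm must run with root privileges (UID 0)
+if [ "$(id -u)" -ne 0 ]; then
+  echo "Please re-run as root, e.g. via: sudo -i" >&2
+fi
+```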
 ## Install keadm
 
-There're three ways to download a `keadm` binary
+There are three ways to download the `keadm` binary:
 
-- Download from [github release](https://github.com/kubeedge/kubeedge/releases).
+1. Download from [GitHub release](https://github.com/kubeedge/kubeedge/releases).
 
-  Now KubeEdge github officially holds three arch releases: amd64, arm, arm64. Please download the right arch package according to your platform, with your expected version.
+   KubeEdge GitHub officially holds three architecture releases: amd64, arm, and arm64. Please download the correct package according to your platform and desired version.
+
   ```shell
   wget https://github.com/kubeedge/kubeedge/releases/download/v1.12.1/keadm-v1.12.1-linux-amd64.tar.gz
   tar -zxvf keadm-v1.12.1-linux-amd64.tar.gz
   cp keadm-v1.12.1-linux-amd64/keadm/keadm /usr/local/bin/keadm
   ```
-- Download from dockerhub KubeEdge official release image.
+
+2. Download from the official KubeEdge release image on Docker Hub.
 
   ```shell
   docker run --rm kubeedge/installation-package:v1.12.1 cat /usr/local/bin/keadm > /usr/local/bin/keadm && chmod +x /usr/local/bin/keadm
   ```
-- Build from source
+
+3. Build from source
 
-  ref: [build from source](./install-with-binary#build-from-source)
-
+   Refer to [build from source](./install-with-binary#build-from-source) for instructions.
 
 ## Setup Cloud Side (KubeEdge Master Node)
 
-By default ports `10000` and `10002` in your cloudcore needs to be accessible for your edge nodes.
+By default, ports `10000` and `10002` on your CloudCore need to be accessible for your edge nodes.
+
+**IMPORTANT NOTES:**
 
-**IMPORTANT NOTE:**
+1. At least one of `kubeconfig` or `master` must be configured correctly to verify the version and other information of the Kubernetes cluster.
 
-1. At least one of kubeconfig or master must be configured correctly, so that it can be used to verify the version and other info of the k8s cluster.
-2. Please make sure edge node can connect cloud node using local IP of cloud node, or you need to specify public IP of cloud node with `--advertise-address` flag.
-3. `--advertise-address` is the address exposed by the cloud side (will be added to the SANs of the CloudCore certificate), the default value is the local IP.
+2. Ensure the edge node can connect to the cloud node using the local IP of the cloud node, or specify the public IP of the cloud node with the `--advertise-address` flag.
+
+3. `--advertise-address` is the address exposed by the cloud side (it will be added to the SANs of the CloudCore certificate). The default value is the local IP.
 
 ### keadm init
 
-`keadm init` provides a solution for integrating Cloudcore helm chart. Cloudcore will be deployed to cloud nodes in container mode.
+`keadm init` provides a solution for integrating the Cloudcore Helm chart. Cloudcore will be deployed to cloud nodes in container mode.
 
 Example:
 
@@ -55,6 +58,7 @@
 keadm init --advertise-address="THE-EXPOSED-IP" --profile version=v1.12.1 --kube-config=/root/.kube/config
 ```
 
 Output:
+
 ```shell
 Kubernetes version verification passed, KubeEdge installation will start...
 CLOUDCORE started
 =========CHART DETAILS=======
 STATUS: deployed
 REVISION: 1
 ```
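 
+Since `keadm init` installs CloudCore as a Helm release, you can also inspect the release directly. A minimal sketch, assuming the default release name `cloudcore` in the `kubeedge` namespace:
+
+```shell
+# Show the release status and the values it was rendered with
+helm status cloudcore -n kubeedge
+helm get values cloudcore -n kubeedge
+```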
-You can run `kubectl get all -n kubeedge` to ensure that cloudcore start successfully just like below.
+You can run `kubectl get all -n kubeedge` to ensure that CloudCore started successfully, as shown below.
+
 ```shell
 # kubectl get all -n kubeedge
 NAME                             READY   STATUS    RESTARTS   AGE
 pod/cloudcore-56b8454784-ngmm8   1/1     Running   0          46s
@@ -82,11 +87,13 @@
 NAME                                   DESIRED   CURRENT   READY   AGE
 replicaset.apps/cloudcore-56b8454784   1         1         1       46s
 ```
 
-**IMPORTANT NOTE:**
+**IMPORTANT NOTES:**
 
 1. Flags can be set with `--set key=value` for the cloudcore Helm chart; refer to [KubeEdge Cloudcore Helm Charts README.md](https://github.com/kubeedge/kubeedge/blob/master/manifests/charts/cloudcore/README.md).
+
 2. You can start with one of Keadm’s built-in configuration profiles and then further customize the configuration for your specific needs. Currently, the built-in configuration profile keyword is `version`. Refer to [version.yaml](https://github.com/kubeedge/kubeedge/blob/master/manifests/profiles/version.yaml) as a `values.yaml` template; you can create your custom values file from it and add flags like `--profile version=v1.9.0 --set key=value` to use this profile. The `--external-helm-root` flag provides a feature to install external Helm charts such as EdgeMesh.
-3. `keadm init` deploy cloudcore in container mode, if you want to deploy cloudcore as binary, please ref [`keadm deprecated init`](#keadm-deprecated-init) below.
+
+3. By default, `keadm init` deploys Cloudcore in container mode. If you want to deploy Cloudcore as a binary, please refer to [`keadm deprecated init`](#keadm-deprecated-init).
 
 Example:
 
@@ -94,29 +101,42 @@
 keadm init --set server.advertiseAddress="THE-EXPOSED-IP" --set server.nodeName=allinone --kube-config=/root/.kube/config --force --external-helm-root=/root/go/src/github.com/edgemesh/build/helm --profile=edgemesh
 ```
 
-If you are familiar with the helm chart installation, please refer to [KubeEdge Helm Charts](https://github.com/kubeedge/kubeedge/tree/master/manifests/charts).
+If you are familiar with Helm chart installation, please refer to [KubeEdge Helm Charts](https://github.com/kubeedge/kubeedge/tree/master/manifests/charts).
+
+
+**SPECIAL SCENARIO:**
+If the hardware conditions of edge nodes are limited, we need to add labels to prevent some applications from being scheduled to edge nodes. `kube-proxy` and some other components are not required at the edge, and we can handle them accordingly.
+
+```
+kubectl get daemonset -n kube-system |grep -v NAME |awk '{print $1}' | xargs -n 1 kubectl patch daemonset -n kube-system --type='json' -p='[{"op": "replace","path": "/spec/template/spec/affinity","value":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"node-role.kubernetes.io/edge","operator":"DoesNotExist"}]}]}}}}]'
+
+```
+To handle kube-proxy, you can refer to the [two methods](#anchor-name) mentioned in the "Enable `kubectl logs` Feature" section of this document.
 
 ### keadm manifest generate
 
-You can also get the manifests with `keadm manifest generate`.
+You can generate the manifests using `keadm manifest generate`.
 
 Example:
 
 ```shell
 keadm manifest generate --advertise-address="THE-EXPOSED-IP" --kube-config=/root/.kube/config > kubeedge-cloudcore.yaml
 ```
+
-> Add --skip-crds flag to skip outputing the CRDs
+> Add the `--skip-crds` flag to skip outputting the CRDs.
 
 ### keadm deprecated init
 
-`keadm deprecated init` will install cloudcore in binary process, generate the certs and install the CRDs. It also provides a flag by which a specific version can be set.
+`keadm deprecated init` installs Cloudcore as a binary process, generates the certificates, and installs the CRDs. It also provides a flag to set a specific version.
+
+**IMPORTANT NOTES:**
 
-**IMPORTANT NOTE:**
+1. At least one of `kubeconfig` or `master` must be configured correctly to verify the version and other information of the Kubernetes cluster.
 
-1. At least one of kubeconfig or master must be configured correctly, so that it can be used to verify the version and other info of the k8s cluster.
-2. Please make sure edge node can connect cloud node using local IP of cloud node, or you need to specify public IP of cloud node with `--advertise-address` flag.
-3. `--advertise-address` is the address exposed by the cloud side (will be added to the SANs of the CloudCore certificate), the default value is the local IP.
+2. Ensure the edge node can connect to the cloud node using the local IP of the cloud node, or specify the public IP of the cloud node with the `--advertise-address` flag.
+
+3. `--advertise-address` is the address exposed by the cloud side (it will be added to the SANs of the CloudCore certificate). The default value is the local IP.
 
 Example:
 ```shell
@@ -131,7 +151,8 @@ keadm manifest generate --advertise-address="THE-EXPOSED-IP" --kube-config=/root
 ...
 CloudCore started
 ```
 
-  You can run `ps -elf | grep cloudcore` command to ensure that cloudcore is running successfully.
+  You can run the `ps -elf | grep cloudcore` command to ensure that CloudCore is running successfully.
+
   ```shell
   # ps -elf | grep cloudcore
   0 S root 2736434 1 1 80 0 - 336281 futex_ 11:02 pts/2 00:00:00 /usr/local/bin/cloudcore
@@ -142,7 +163,7 @@
 
 ### Get Token From Cloud Side
 
-Run `keadm gettoken` in **cloud side** will return the token, which will be used when joining edge nodes.
+Run `keadm gettoken` on the **cloud side** to retrieve the token, which will be used when joining edge nodes.
 
 ```shell
 # keadm gettoken
 27a37ef16159f7d3be8fae95d588b79b3adaaf92727b72659eb89758c66ffda2.eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE1OTAyMTYwNzd9.JBj8LLYWXwbbvHKffJBpPd5CyxqapRQYDIXtFZErgYE
 ```
 
@@ -152,7 +173,8 @@
 ### Join Edge Node
 
 #### keadm join
-`keadm join` will install edgecore. It also provides a flag by which a specific version can be set. It will pull image [kubeedge/installation-package](https://hub.docker.com/r/kubeedge/installation-package) from dockerhub and copy binary `edgecore` from container to hostpath, and then start `edgecore` as a system service.
+
+`keadm join` installs Edgecore. It also provides a flag to set a specific version. It pulls the image [kubeedge/installation-package](https://hub.docker.com/r/kubeedge/installation-package) from Docker Hub, copies the `edgecore` binary from the container to the host path, and then starts `edgecore` as a system service.
 
 Example:
 
@@ -160,10 +182,13 @@
 keadm join --cloudcore-ipport="THE-EXPOSED-IP":10000 --token=27a37ef16159f7d3be8fae95d588b79b3adaaf92727b72659eb89758c66ffda2.eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE1OTAyMTYwNzd9.JBj8LLYWXwbbvHKffJBpPd5CyxqapRQYDIXtFZErgYE --kubeedge-version=v1.12.1
 ```
 
-**IMPORTANT NOTE:**
-1. `--cloudcore-ipport` flag is a mandatory flag.
-2. If you want to apply certificate for edge node automatically, `--token` is needed.
-3. The kubeEdge version used in cloud and edge side should be same.
+**IMPORTANT NOTES:**
+
+1. The `--cloudcore-ipport` flag is mandatory.
+
+2. If you want to apply for the edge node certificate automatically, the `--token` flag is needed.
+
+3. The KubeEdge version used on the cloud and edge sides should be the same.
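 
+To check note 3 before joining, you can compare the keadm versions on the two hosts — a minimal sketch (`keadm version` prints the version of the local keadm binary, which normally matches the KubeEdge components it installs):
+
+```shell
+# Run on the cloud host and on the edge host; the outputs should match
+keadm version
+```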
 Output of `keadm join`:
 
@@ -172,7 +197,8 @@
 KubeEdge edgecore is running, For logs visit: journalctl -u edgecore.service -xe
 ```
 
-you can run `systemctl status edgecore` command to ensure edgecore is running successfully
+You can run the `systemctl status edgecore` command to ensure Edgecore is running successfully:
+
 ```shell
 # systemctl status edgecore
 ● edgecore.service
@@ -185,14 +211,17 @@
 ```
 
 #### keadm deprecated join
-You can also use `keadm deprecated join` to start edgecore from release pacakge. It will download release packages from [KubeEdge release website](https://github.com/kubeedge/kubeedge/releases), and then start `edgecore` in binary progress.
+
+You can also use `keadm deprecated join` to start Edgecore from the release package. It will download the release package from the [KubeEdge release website](https://github.com/kubeedge/kubeedge/releases) and then start `edgecore` as a binary process.
 
 Example:
+
 ```shell
 keadm deprecated join --cloudcore-ipport="THE-EXPOSED-IP":10000 --token=27a37ef16159f7d3be8fae95d588b79b3adaaf92727b72659eb89758c66ffda2.eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE1OTAyMTYwNzd9.JBj8LLYWXwbbvHKffJBpPd5CyxqapRQYDIXtFZErgYE --kubeedge-version=1.12.0
 ```
 
 Output:
+
 ```shell
 MQTT is installed in this host
 ...
@@ -200,59 +229,63 @@
 KubeEdge edgecore is running, For logs visit: journalctl -u edgecore.service -xe
 ```
 
 ### Deploy demo on edge nodes
-ref: [Deploy demo on edge nodes](./install-with-binary#deploy-demo-on-edge-nodes)
+
+Refer to the [Deploy demo on edge nodes](./install-with-binary#deploy-demo-on-edge-nodes) documentation.
 
 ### Enable `kubectl logs` Feature
 
-Before deploying metrics-server , `kubectl logs` feature must be activated:
+Before deploying the metrics-server, the `kubectl logs` feature must be activated:
 
-> Note that if cloudcore is deployed using helm:
-> - The stream certs are generated automatically and cloudStream feature is enabled by default. So, step 1-3 could
-    be skipped unless customization is needed.
-> - Also, step 4 could be finished by iptablesmanager component by default, manually operations are not needed.
-    Refer to the [cloudcore helm values](https://github.com/kubeedge/kubeedge/blob/master/manifests/charts/cloudcore/values.yaml#L67).
-> - Operations in step 5-6 related to cloudcore could also be skipped.
+> Note for Helm deployments:
+> - Stream certificates are generated automatically and the CloudStream feature is enabled by default. Therefore, Steps 1-3 can be skipped unless customization is needed.
+> - Step 4 can be completed by the iptablesmanager component by default, so manual operations are not needed. Refer to the [cloudcore helm values](https://github.com/kubeedge/kubeedge/blob/master/manifests/charts/cloudcore/values.yaml#L67).
+> - Operations in Steps 5-6 related to Cloudcore can also be skipped.
 
-1. Make sure you can find the kubernetes `ca.crt` and `ca.key` files. If you set up your kubernetes cluster by `kubeadm` , those files will be in `/etc/kubernetes/pki/` dir.
+1. Ensure you can locate the Kubernetes `ca.crt` and `ca.key` files. If you set up your Kubernetes cluster with `kubeadm`, these files will be in the `/etc/kubernetes/pki/` directory.
 
    ``` shell
   ls /etc/kubernetes/pki/
   ```
 
-2. Set `CLOUDCOREIPS` env. The environment variable is set to specify the IP address of cloudcore, or a VIP if you have a highly available cluster.
-   Set `CLOUDCORE_DOMAINS` instead if Kubernetes uses domain names to communicate with cloudcore.
+2. Set the `CLOUDCOREIPS` environment variable to specify the IP address of Cloudcore, or a VIP if you have a highly available cluster. Set `CLOUDCORE_DOMAINS` instead if Kubernetes uses domain names to communicate with Cloudcore.
 
    ```bash
    export CLOUDCOREIPS="192.168.0.139"
    ```
-   (Warning: the same **terminal** is essential to continue the work, or it is necessary to type this command again.) Checking the environment variable with the following command:
+
+   (Warning: you must continue the work in the same **terminal**; otherwise, this variable must be exported again.) You can check the environment variable with the following command:
+
   ``` shell
   echo $CLOUDCOREIPS
   ```
 
-3. Generate the certificates for **CloudStream** on cloud node, however, the generation file is not in the `/etc/kubeedge/`, we need to copy it from the repository which was git cloned from GitHub.
-   Change user to root:
+3. Generate the certificates for **CloudStream** on the cloud node. The generation file is not in `/etc/kubeedge/`, so it needs to be copied from the repository cloned from GitHub. Switch to the root user:
+
   ```shell
   sudo su
   ```
-   Copy certificates generation file from original cloned repository:
+
   Copy the certificate generation file from the original cloned repository:
+
   ```shell
   cp $GOPATH/src/github.com/kubeedge/kubeedge/build/tools/certgen.sh /etc/kubeedge/
   ```
+
   Change directory to the kubeedge directory:
+
   ```shell
   cd /etc/kubeedge/
   ```
+
   Generate certificates from **certgen.sh**
   ```bash
   /etc/kubeedge/certgen.sh stream
   ```
 
-4. It is needed to set iptables on the host. (This command should be executed on every apiserver deployed node.)(In this case, this the master node, and execute this command by root.)
-   Run the following command on the host on which each apiserver runs:
+4. Set iptables on the host. (This command should be executed on every node where an apiserver is deployed — in this case, the master node — as the root user.) Run the following command on the host where each apiserver runs:
 
-   **Note:** You need to get the configmap first, which contains all the cloudcore ips and tunnel ports.
+   **Note:** First, get the configmap containing all the Cloudcore IPs and tunnel ports:
 
   ```bash
   kubectl get cm tunnelport -nkubeedge -oyaml
 
  apiVersion: v1
  kind: ConfigMap
  metadata:
    annotations:
      tunnelportrecord.kubeedge.io: '{"ipTunnelPort":{"192.168.1.16":10350, "192.168.1.17":10351},"port":{"10350":true, "10351":true}}'
    creationTimestamp: "2021-06-01T04:10:20Z"
  ...
  ```
 
-   Then set all the iptables for multi cloudcore instances to every node that apiserver runs. The cloudcore ips and tunnel ports should be get from configmap above.
+   Then set the iptables rules for all Cloudcore instances on every node where the apiserver runs. The Cloudcore IPs and tunnel ports should be obtained from the configmap above.
 
   ```bash
   iptables -t nat -A OUTPUT -p tcp --dport $YOUR-TUNNEL-PORT -j DNAT --to $YOUR-CLOUDCORE-IP:10003
   iptables -t nat -A OUTPUT -p tcp --dport 10351 -j DNAT --to 192.168.1.17:10003
   ```
 
-   If you are not sure if you have setting of iptables, and you want to clean all of them.
-   (If you set up iptables wrongly, it will block you out of your `kubectl logs` feature)
+   If you are unsure about the current iptables settings and want to clean them all, you can do so. (If you set up iptables wrongly, it will lock you out of the `kubectl logs` feature.)
+
   The following command can be used to clean up iptables:
+
   ``` shell
   iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
   ```
 
-
 5. Modify **both** `/etc/kubeedge/config/cloudcore.yaml` and `/etc/kubeedge/config/edgecore.yaml` on cloudcore and edgecore. Set **cloudStream** and **edgeStream** to `enable: true`. Change the server IP to the cloudcore IP (the same as $CLOUDCOREIPS).
 
    Open the YAML file in cloudcore:
+
   ```shell
   sudo nano /etc/kubeedge/config/cloudcore.yaml
   ```
 
   Modify the file in the following part (`enable: true`):
+
   ```yaml
   cloudStream:
     enable: true
     streamPort: 10003
     tlsStreamCAFile: /etc/kubeedge/ca/streamCA.crt
     tlsStreamCertFile: /etc/kubeedge/certs/stream.crt
     tlsStreamPrivateKeyFile: /etc/kubeedge/certs/stream.key
@@ -304,10 +339,13 @@
   ```
 
   Open the YAML file in edgecore:
+
   ``` shell
   sudo nano /etc/kubeedge/config/edgecore.yaml
   ```
+
   Modify the file in the following part (`enable: true`), (`server: 192.168.0.193:10004`):
+
   ``` yaml
   edgeStream:
     enable: true
     handshakeTimeout: 30
     readDeadline: 15
     server: 192.168.0.193:10004
     writeDeadline: 15
   ```
 
@@ -325,24 +363,32 @@
   ``` shell
   sudo su
   ```
-   cloudCore in process mode:
+
+   If CloudCore is running in process mode:
+
   ``` shell
   pkill cloudcore
   nohup cloudcore > cloudcore.log 2>&1 &
   ```
-   or cloudCore in kubernetes deployment mode:
+
+   If CloudCore is running in Kubernetes deployment mode:
+
   ``` shell
   kubectl -n kubeedge rollout restart deployment cloudcore
   ```
-   edgeCore:
+
+   EdgeCore:
+
   ``` shell
   systemctl restart edgecore.service
   ```
 
   If restarting edgecore fails, check whether it is caused by `kube-proxy`, and kill that process if so. **KubeEdge** rejects it by default; we use a replacement called [edgemesh](https://github.com/kubeedge/kubeedge/blob/master/docs/proposals/edgemesh-design.md).
 
-   **Note:** the importance is to avoid `kube-proxy` being deployed on edgenode. There are two methods to solve it:
 
-   1. Add the following settings by calling `kubectl edit daemonsets.apps -n kube-system kube-proxy`:
+   **Note:** It is important to avoid `kube-proxy` being deployed on the edge node; there are two methods to achieve this:
+
+   - **Method 1:** Add the following settings by calling `kubectl edit daemonsets.apps -n kube-system kube-proxy`:
+
     ``` yaml
     spec:
       template:
         spec:
           affinity:
             nodeAffinity:
               requiredDuringSchedulingIgnoredDuringExecution:
                 nodeSelectorTerms:
                 - matchExpressions:
                   - key: node-role.kubernetes.io/edge
                     operator: DoesNotExist
     ```
-    or just run the below command directly in the shell window:
+
+    or just run the following command directly in the shell window:
+
     ```shell
     kubectl patch daemonset kube-proxy -n kube-system -p '{"spec": {"template": {"spec": {"affinity": {"nodeAffinity": {"requiredDuringSchedulingIgnoredDuringExecution": {"nodeSelectorTerms": [{"matchExpressions": [{"key": "node-role.kubernetes.io/edge", "operator": "DoesNotExist"}]}]}}}}}}}'
     ```
 
-   2. If you still want to run `kube-proxy`, ask **edgecore** not to check the environment by adding the env variable in `edgecore.service` :
+   - **Method 2:** If you still want to run `kube-proxy`, instruct **edgecore** not to check the environment by adding the environment variable in `edgecore.service`:
 
      ``` shell
     sudo vi /etc/kubeedge/edgecore.service
     ```
 
-     Add the following line into the **edgecore.service** file:
+    Add the following line into the **edgecore.service** file:
 
     ``` shell
     Environment="CHECK_EDGECORE_ENVIRONMENT=false"
     ```
 
-     The final file should look like this:
+    The final file should look like this:
 
     ```
     Description=edgecore.service
 
@@ -387,6 +435,7 @@
     ```
 
 ### Support Metrics-server in Cloud
+
 1. This feature reuses the CloudStream and EdgeStream modules, so you also need to perform all the steps of *Enable `kubectl logs` Feature*.
 
 2. Since the kubelet ports of edge nodes and cloud nodes are not the same, the current release version of metrics-server (0.3.x) does not support automatic port identification (it is a 0.4.0 feature), so for now you need to compile the image from the master branch yourself.
 
@@ -458,7 +507,8 @@
      - charlie-latest
 ```
 
-**IMPORTANT NOTE:**
+**IMPORTANT NOTES:**
+
 1. Metrics-server needs to use the hostNetwork mode.
 
 2. Use the image compiled by yourself and set imagePullPolicy to Never.
 
@@ -507,4 +557,5 @@ It provides a flag for users to specify kubeconfig path, the default path is `/r
 ```
 
 ### Node
+
 `keadm reset` or `keadm deprecated reset` will stop `edgecore`, and it doesn't uninstall/remove any of the prerequisites.
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/advanced/image-prepull.md b/i18n/zh/docusaurus-plugin-content-docs/current/advanced/image-prepull.md
new file mode 100644
index 0000000000..5305f0275e
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/advanced/image-prepull.md
@@ -0,0 +1,142 @@
+# KubeEdge Image PrePull Feature Guide Document
+
+KubeEdge version 1.16 introduces a new image pre-pull feature. Users can load images ahead of time on edge nodes through the Kubernetes API of ImagePrePullJob. This feature supports pre-pulling multiple images in batches on multiple edge nodes or node groups, helping to reduce the high failure rates and inefficiencies associated with loading images during application deployment or updates, especially in large-scale scenarios.
+
+API example for image pre-pull:
+
+```
+apiVersion: operations.kubeedge.io/v1alpha1
+kind: ImagePrePullJob
+metadata:
+  name: imageprepull-example
+  labels:
+    description: ImagePrePullLabel
+spec:
+  imagePrePullTemplate:
+    images:
+      - image1
+      - image2
+    nodes:
+      - edgenode1
+      - edgenode2
+    checkItems:
+      - "disk"
+    failureTolerate: "0.3"
+    concurrency: 2
+    timeoutSeconds: 180
+    retryTimes: 1
+
+```
+
+
+## 1. Preparation
+
+**Selected example: Nginx Demo**
+
+Nginx is a lightweight image, so users can run this demo without any prerequisite environment. The Nginx image will be uploaded to a private image repository in advance. Users can call the pre-pull API from the cloud to deliver the Nginx image in the private image repository to the edge nodes ahead of time.
+
+
+**1) This example requires KubeEdge v1.16.0+ and Kubernetes v1.27.0+. The versions selected here are KubeEdge v1.16.0 and Kubernetes v1.27.3.**
+
+```
+[root@ke-cloud ~]# kubectl get node
+NAME             STATUS   ROLES                  AGE   VERSION
+cloud.kubeedge   Ready    control-plane,master   3d    v1.27.3
+edge.kubeedge    Ready    agent,edge             2d    v1.27.7-kubeedge-v1.16.0
+
+Note: The following operations will use the edge node edge.kubeedge. If you refer to this document for related operations, the configuration of the edge node name in subsequent steps needs to be changed according to your actual situation.
+```
+
+**2) Ensure that CloudCore has the following configuration enabled:**
+
+
+```
+  taskManager:
+    enable: true // Change from false to true
+```
+This change can be made by running `kubectl edit configmap cloudcore -n kubeedge` and then restarting the CloudCore component.
+
+
+
+
+## 2. Prepare the Secret for the private image
+For demonstration purposes, a private image repository on Alibaba Cloud is used here: registry.cn-hangzhou.aliyuncs.com/, with the demo namespace jilimoxing. Modify these values according to your actual situation.
+
+**1) Push nginx to the private image repository**
+```
+[root@cloud ~]# docker tag nginx registry.cn-hangzhou.aliyuncs.com/jilimoxing/test:nginx
+[root@cloud crds~]# docker push registry.cn-hangzhou.aliyuncs.com/jilimoxing/test:nginx
+```
+
+**2) Create a Secret on the cloud**
+Secret is not a required field in ImagePrePullJob. If you need to pre-pull a private image, you can generate a Secret for it.
+You can also use kubectl to create a Secret for accessing a container registry, such as when you don't have a Docker configuration file:
+
+```
+[root@cloud ~]# kubectl create secret docker-registry my-secret \
+  --docker-server=my-registry.example:5000 \
+  --docker-username=tiger \
+  --docker-password=pass1234 \
+  --docker-email=tiger@acme.example
+
+[root@cloud ~]# kubectl get secret -A
+NAMESPACE     NAME        TYPE                             DATA   AGE
+default       my-secret   kubernetes.io/dockerconfigjson   1      31s
+
+```
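+
+If you have already run `docker login` against the registry, the Secret can be created directly from the existing credentials file. A minimal sketch, assuming the default Docker credentials path `$HOME/.docker/config.json` (adjust the path and Secret name to your environment):
+
+```shell
+# Create a dockerconfigjson Secret from the credentials written by `docker login`
+kubectl create secret generic my-secret \
+  --from-file=.dockerconfigjson=$HOME/.docker/config.json \
+  --type=kubernetes.io/dockerconfigjson
+```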
+
+## 3. Create the YAML File
+
+**1) Modify the configuration**
+
+Create a YAML file on the cloud node, and modify the corresponding images and imageSecrets information so that they are consistent with the images to be pre-pulled and the image repository Secret, as shown below:
+```
+
+[root@ke-cloud ~]# vim imageprepull.yaml
+
+apiVersion: operations.kubeedge.io/v1alpha1
+kind: ImagePrePullJob
+metadata:
+  name: imageprepull-example
+spec:
+  imagePrePullTemplate:
+    concurrency: 1
+    failureTolerate: '0.1'
+    images:
+      - registry.cn-hangzhou.aliyuncs.com/jilimoxing/test:nginx
+    nodeNames:
+      - edge.kubeedge
+    imageSecrets: default/my-secret
+    retryTimes: 1
+    timeoutSeconds: 120
+
+```
+
+**2) Apply the file**
+
+
+```
+[root@ke-cloud ~]# kubectl apply -f imageprepull.yaml
+```
+
+**3) Get the ImagePrePullJob status**
+
+Check it with the command `kubectl get imageprepulljobs.operations.kubeedge.io imageprepull-example -o jsonpath='{.status}'`:
+
+```
+[root@ke-cloud ~]# kubectl get imageprepulljobs.operations.kubeedge.io imageprepull-example -o jsonpath='{.status}'
+[root@ke-cloud ~]# {"action":"Success","event":"Pull","state":"Successful","status":[{"imageStatus":[{"image":"registry.cn-hangzhou.aliyuncs.com/jilimoxing/test:nginx","state":"Successful"}],"nodeStatus":{"action":"Success","event":"Pull","nodeName":"edge.kubeedge","state":"Successful","time":"2024-04-26T18:51:41Z"}}],"time":"2024-04-26T18:51:41Z"}
+```
+
+
+## 4. Check whether the edge node image has been pre-pulled successfully
+
+On the edge node, use the command `ctr -n k8s.io i ls` to check.
+```
+[root@edge ~]# ctr -n k8s.io i ls
+```
+The corresponding image has been pre-pulled successfully.
+```
+REF                                                     TYPE                                                  DIGEST                                                                  SIZE     PLATFORMS   LABELS
+registry.cn-hangzhou.aliyuncs.com/jilimoxing/test:nginx application/vnd.docker.distribution.manifest.v2+json sha256:73e957703f1266530db0aeac1fd6a3f87c1e59943f4c13eb340bb8521c6041d7 67.3 MiB linux/amd64
+```
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/setup/install-with-keadm.md b/i18n/zh/docusaurus-plugin-content-docs/current/setup/install-with-keadm.md
index 13e3e7499e..2f9d7ce967 100644
--- a/i18n/zh/docusaurus-plugin-content-docs/current/setup/install-with-keadm.md
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/setup/install-with-keadm.md
@@ -50,6 +50,17 @@ KubeEdge cloudcore is running, For logs visit: /var/log/kubeedge/cloudcore.log
 When you see the above information, the KubeEdge cloud component cloudcore is running successfully.
 
+**SPECIAL SCENARIO:**
+If the hardware conditions of the edge side are limited, labels need to be added here so that some applications are not scheduled to edge nodes. kube-proxy and some other applications do not have to be deployed at the edge, so they can be handled accordingly.
+
+
+```
+kubectl get daemonset -n kube-system |grep -v NAME |awk '{print $1}' | xargs -n 1 kubectl patch daemonset -n kube-system --type='json' -p='[{"op": "replace","path": "/spec/template/spec/affinity","value":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"node-role.kubernetes.io/edge","operator":"DoesNotExist"}]}]}}}}]'
+
+```
+
+For how to handle kube-proxy, refer to the [two methods](#anchor-name) mentioned in the "Enable `kubectl logs` Feature" section of this document.
+
 ### keadm beta init
 
 If you want to deploy the cloud component cloudcore in a containerized way, you can use `keadm beta init` to install it.
@@ -287,7 +298,7 @@ KubeEdge edgecore is running, For logs visit: /var/log/kubeedge/edgecore.log
 If you cannot restart edgecore, check whether it is caused by `kube-proxy` and kill that process. **kubeedge** does not include it by default; we use [edgemesh](https://github.com/kubeedge/kubeedge/blob/master/docs/proposals/edgemesh-design.md) as a replacement.
 
-   **Note:** Consider preventing `kube-proxy` from being deployed on edgenode. There are two solutions:
+   **Note:** Consider preventing `kube-proxy` from being deployed on edgenode. There are two solutions:
 
   1. Add the following settings by calling `kubectl edit daemonsets.apps -n kube-system kube-proxy`:
diff --git a/i18n/zh/docusaurus-plugin-content-pages/case-studies/Raisecom-Tech/index.mdx b/i18n/zh/docusaurus-plugin-content-pages/case-studies/Raisecom-Tech/index.mdx
new file mode 100644
index 0000000000..f6335637c0
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-pages/case-studies/Raisecom-Tech/index.mdx
@@ -0,0 +1,26 @@
+---
+date: 2024-05-27
+title: Raisecom Technology CO.,Ltd
+subTitle:
+description: Using KubeEdge as an important part of the implementation of its intelligent monitoring solution, Raisecom effectively achieved AI monitoring of factory safety, reduced the occurrence of safety accidents, and improved the production efficiency of the factory.
+tags:
+  - UserCase
+---
+
+# Intelligent monitoring solution based on KubeEdge
+
+## Challenge
+
+Ensuring industrial production safety is an important requirement of Raisecom's manufacturing factories. Traditionally, workers' production safety was checked manually, which was slow and inefficient. Cases where workers did not comply with the safety requirements still occurred from time to time and were easily overlooked, posing great safety risks and affecting the production efficiency of the factory.
+
+## Solution
+
+An industrial intelligent monitoring application based on AI algorithms was developed to replace manual monitoring. However, an intelligent monitoring application alone was not enough: new problems emerged, such as the deployment and management of intelligent edge applications and the collaboration between training on the cloud and inference on the edge, which became a bottleneck for the large-scale application of this solution in industrial production environments.
+
+China Telecom Research Institute used KubeEdge as an important part of the implementation of the intelligent monitoring solution to help Raisecom Technology solve this problem. Architect Xiaohou Shi from China Telecom Research Institute completed the design of this solution. In this case, an industrial vision application combined with deep learning algorithms monitors the safety status of factory workers in real time. KubeEdge was introduced as the edge computing platform to manage the edge devices and the running environment of the intelligent monitoring application. With KubeEdge, the monitoring model can be trained on the cloud and automatically deployed to the edge nodes for inference, improving operational efficiency and reducing maintenance costs.
+
+## Benefits
+
+In this application scenario, KubeEdge completed the unified management of edge applications, while also taking full advantage of cloud-edge collaboration. With KubeEdge as the edge computing platform, AI monitoring of factory safety was completed effectively, which reduced the occurrence of safety accidents and improved the production efficiency of the factory.
+
+Based on this successful case, more deep learning algorithms will be deployed on KubeEdge to solve edge computing problems, and more cooperation with KubeEdge on scenario-based industrial intelligent applications will be carried out in the future.
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-pages/case-studies/XingHai/index.mdx b/i18n/zh/docusaurus-plugin-content-pages/case-studies/XingHai/index.mdx
new file mode 100644
index 0000000000..6aa505e6fe
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-pages/case-studies/XingHai/index.mdx
@@ -0,0 +1,30 @@
+---
+date: 2024-05-27
+title: Xinghai IoT
+subTitle:
+description: Xinghai IoT used KubeEdge to build a smart campus with cloud-edge-device collaboration, which greatly improved campus management efficiency.
+tags:
+  - UserCase
+---
+
+# Building smart campuses based on KubeEdge
+
+## Challenge
+
+Xinghai IoT is an IoT company that provides comprehensive smart building solutions by leveraging a construction IoT platform, intelligent hardware, and AI. It is a creator and practitioner of the smart campus standards of China Overseas Property Management and a core full-chain service provider of Huawei's smart campus solutions.
+
+The company serves customers in 80 major cities in China and around the world. It has delivered 741 projects with a total construction area of more than 156 million square meters, covering a diverse range of building types such as high-end residential buildings, commercial complexes, super office buildings, government properties, and industrial parks.
+
+In recent years, as its business expands and campus occupants' demands for service quality grow, Xinghai IoT has been committed to using edge computing and IoT to build sustainable smart campuses and improve the efficiency of campus operations and management.
+
+## Solution
+
+Today, Xinghai IoT serves an increasingly wide range of areas, so its solutions need to be portable and replicable and must guarantee real-time data processing and secure data storage. KubeEdge, designed around cloud native development and edge-cloud synergy, has become an indispensable part of Xinghai IoT's smart campus construction.
+
+- Container images are built once and run anywhere, effectively reducing the deployment and O&M complexity of new campuses.
+- Edge-cloud synergy enables data to be processed at the edge, ensuring real-time performance and security while lowering network bandwidth costs.
+- KubeEdge makes it easy to add hardware and supports common protocols, with no secondary development required.
+
+## Benefits
+
+Based on KubeEdge and its own Xinghai IoT cloud platform, Xinghai IoT built a smart campus with cloud-edge-device collaboration, greatly improving campus management efficiency. With the help of AI, nearly 30% of repetitive work is automated. In the future, Xinghai IoT will continue to cooperate with KubeEdge and launch KubeEdge-based smart campus solutions.
\ No newline at end of file
diff --git a/src/pages/case-studies/Raisecom-Tech/index.mdx b/src/pages/case-studies/Raisecom-Tech/index.mdx
new file mode 100644
index 0000000000..847962c07f
--- /dev/null
+++ b/src/pages/case-studies/Raisecom-Tech/index.mdx
@@ -0,0 +1,24 @@
+---
+date: 2024-05-27
+title: Raisecom Technology CO.,Ltd
+subTitle:
+description: Using KubeEdge as an important part of the implementation of the intelligent monitoring solution effectively completes the AI monitoring of factory safety, reduces the occurrence of safety accidents, and improves the production efficiency of the factory.
+
+tags:
+  - UserCase
+---
+
+# Intelligent monitoring solution based on KubeEdge
+
+## Challenge
+It is an important demand for the manufactory of Raisecom Technology to ensure industrial production safety. Traditionally, workers' production safety was checked manually, which was slow and inefficient. Situations where workers did not obey the safety requirements still happened and could sometimes be overlooked, creating great safety risks and affecting the production efficiency of the factory.
+
+## Solution
+An industrial intelligent monitoring application with AI algorithms was developed to replace the manual method. An intelligent application alone was not enough, however: new problems arose, such as the deployment and management of the intelligent edge application and the collaboration between training on the cloud and inference on the edge, which could become a bottleneck for the large-scale application of the solution in industrial production environments.
+
+China Telecom Research Institute used KubeEdge as an important part of the implementation of the intelligent monitoring solution to help Raisecom Technology solve the problem. Architect Xiaohou Shi from China Telecom Research Institute completed the design of this solution. In this case, the safety status of factory workers was monitored in real time by the industrial vision application with a deep learning algorithm. KubeEdge was introduced as an edge computing platform for the management of the edge devices and the running environment of the intelligent monitoring application. The monitoring model could be trained on the cloud and deployed to the edge nodes for inference automatically via KubeEdge, which improved the efficiency of operation and reduced the cost of maintenance.
+
+## Impact
+In this application scenario, KubeEdge completed the unified management of edge applications. KubeEdge could also make full use of the advantages of the collaboration of the cloud and the edge. With the help of KubeEdge as the edge computing platform, AI monitoring of manufactory safety was completed effectively, which reduced the occurrence of safety accidents and improved the production efficiency of the manufactory.
+
+Based on this successful case, more deep learning algorithms will be deployed on KubeEdge to handle edge computing problems. More cooperation on scenario-oriented industrial intelligent applications with KubeEdge will be carried out in the future.
diff --git a/src/pages/case-studies/XingHai/index.mdx b/src/pages/case-studies/XingHai/index.mdx
new file mode 100644
index 0000000000..28955d8761
--- /dev/null
+++ b/src/pages/case-studies/XingHai/index.mdx
@@ -0,0 +1,30 @@
+---
+date: 2024-05-27
+title: XingHai IoT
+subTitle:
+description: Xinghai IoT uses KubeEdge to build a smart campus with cloud-edge-device collaboration, which greatly improves campus management efficiency.
+tags:
+  - UserCase
+---
+
+# Building smart campuses based on KubeEdge
+
+## Challenge
+
+Xinghai IoT is an IoT company that provides comprehensive smart building solutions by leveraging a construction IoT platform, intelligent hardware, and AI. It is a creator and practitioner of smart campus standards for China Overseas Property Management and a core full-chain service provider of smart campus solutions from Huawei.
+
+The company serves its customers in 80 major cities in China and around the world. It has delivered 741 projects, covering more than 156 million square meters. Its business covers a diverse range of building types, such as high-end residential buildings, commercial complexes, super office buildings, government properties, and industrial parks.
+
+In recent years, as its business expands and occupant demands for service quality grow, Xinghai IoT has been committed to using edge computing and IoT to build sustainable smart campuses, improving efficiency for campus operations and management.
+
+## Highlights
+
+Xinghai IoT now offers services in a wide range of areas. Therefore, its solutions should be portable and replicable, and they need to ensure real-time data processing and secure data storage. KubeEdge, with services designed for cloud native development and edge-cloud synergy, has become an indispensable part of Xinghai IoT for building smart campuses.
+
+- Container images are built once to run anywhere, effectively reducing the deployment and O&M complexity of new campuses.
+- Edge-cloud synergy enables data to be processed at the edge, ensuring real-time performance and security while lowering network bandwidth costs.
+- KubeEdge makes adding hardware easy and supports common protocols. No secondary development is needed.
+
+## Benefits
+
+Xinghai IoT built a smart campus with cloud-edge-device synergy based on KubeEdge and its own Xinghai IoT cloud platform, greatly improving the efficiency of campus management. With AI assistance, nearly 30% of the repetitive work is automated. In the future, Xinghai IoT will continue to collaborate with KubeEdge to launch KubeEdge-based smart campus solutions.
\ No newline at end of file