diff --git a/KubeEdge v1.14.2/Apache-beam-analysis/Readme.md b/KubeEdge v1.14.2/Apache-beam-analysis/Readme.md new file mode 100644 index 0000000..f588863 --- /dev/null +++ b/KubeEdge v1.14.2/Apache-beam-analysis/Readme.md @@ -0,0 +1,23 @@ +# Data Analytics with Apache Beam + +## Description + +![High level architecture](images/High_level_Arch.png "High Level Architecture") + +The main aim of analytics engine is to get data from mqtt broker in stream format and apply rules on incoming data in real time and produce alert/action on mqtt broker. Getting data through pipeline and applying analysis function is done by using Apache Beam. + +### Apache Beam + +Apache Beam is an open source, unified model for defining both batch and streaming data-parallel processing pipelines. Using one of the open source Beam SDKs, we can build a program that defines the pipeline. + + +#### Why use Apache Beam for analytics + +There are many frameworks like Hadoop, Spark, Flink, Google Cloud Dataflow etc for stream processing. But there was no unified API to binds all such frameworks and data sources. It was needed to abstract out the application logic from these Big Data frameworks. Apache Beam framework provides this abstraction between your application logic and big data ecosystem. +- A generic dataflow-based model for building an abstract pipeline which could be run on any runtime like Flink/Samza etc. +- The same pipeline code can be executed on cloud(eg. Huawei Cloud Stream based on Apache Flink) and on the edge with a custom backend which can efficiently schedule workloads in an edge cluster and perform distributed analytics. +- Apache Beam integrates well with TensorFlow for machine learning which is a key use-case for edge. +- Beam has support for most of the functions required for stream processing and analytics. + + + diff --git a/KubeEdge v1.14.2/Apache-beam-analysis/index.json b/KubeEdge v1.14.2/Apache-beam-analysis/index.json new file mode 100644 index 0000000..2739217 --- /dev/null +++ b/KubeEdge v1.14.2/Apache-beam-analysis/index.json @@ -0,0 +1,30 @@ +{ + "title": "KubeEdge Deployment", + "description": "Data Analytics with Apache Beam", + "details": { + "steps": [ + { + "title": "Step 1/4", + "text": "step1.md" + }, + { + "title": "Step 2/4", + "text": "step2.md" + }, + { + "title": "Step 3/4", + "text": "step3.md" + }, + { + "title": "Step 4/4", + "text": "step4.md" + } + ], + "intro": { + "text": "intro.md" + } + }, + "backend": { + "imageid": "ubuntu" + } +} \ No newline at end of file diff --git a/KubeEdge v1.14.2/Apache-beam-analysis/intro.md b/KubeEdge v1.14.2/Apache-beam-analysis/intro.md new file mode 100644 index 0000000..44164af --- /dev/null +++ b/KubeEdge v1.14.2/Apache-beam-analysis/intro.md @@ -0,0 +1,31 @@ +# Data Analytics with Apache Beam + +## Description + +![High level architecture](images/High_level_Arch.png "High Level Architecture") + +The main aim of analytics engine is to get data from mqtt broker in stream format and apply rules on incoming data in real time and produce alert/action on mqtt broker. Getting data through pipeline and applying analysis function is done by using Apache Beam. + +### Apache Beam + +Apache Beam is an open source, unified model for defining both batch and streaming data-parallel processing pipelines. Using one of the open source Beam SDKs, we can build a program that defines the pipeline. 
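+
+To make the data flow concrete, the round trip can be sketched from the command line: publish a sample reading to the MQTT broker and watch the topic on which the pipeline publishes alerts. This is only a sketch; the broker address and the topic names used below (`sensor/data`, `alert/data`) are placeholders for illustration, and the actual topics are defined by the pipeline application.
+
+```
+# publish one sample temperature reading to the local MQTT broker (placeholder topic)
+mosquitto_pub -h localhost -p 1883 -t sensor/data -m '{"deviceid":"temp-sensor-01","temperature":62}'
+
+# in a second terminal, watch for alerts produced by the pipeline (placeholder topic)
+mosquitto_sub -h localhost -p 1883 -t alert/data
+```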
+ + +#### Demo 1.1 [Real-time alert]:Read batch data from MQTT,filter and generate alerts +- Basic mqtt read/write support in Apache Beam for batch data +- Reads data from an mqtt topic +- Create PCollection of read data and use it as the initial data for pipeline +- Do a filtering over the data +- Publish an alert on a topic if reading exceeds the value +![Demo1.1](images/Demo1.1.png "Demo1.1:Read batch data from MQTT,filter and generate alerts") + +#### Demo 1.2 [Filter Streaming Data]: Reads streaming data from MQTT, filter at regular intervals +- Read streaming data using MQTT +- Do a filtering over the data at fixed time intervals +![demo1.2](images/Demo1.2.png "Demo1.2:Reads streaming data from MQTT, filter at regular intervals") + +### Prerequisites +- Golang(version: 1.14+) +- KubeEdge(version: v1.5+) +- Docker(version: 18.09-ce+) + diff --git a/KubeEdge v1.14.2/Apache-beam-analysis/step1.md b/KubeEdge v1.14.2/Apache-beam-analysis/step1.md new file mode 100644 index 0000000..ef76d05 --- /dev/null +++ b/KubeEdge v1.14.2/Apache-beam-analysis/step1.md @@ -0,0 +1,14 @@ +# Deploy pipeline application + +### Prerequisites + +- Golang(version: 1.14+) +- KubeEdge(version: v1.5+) +- Docker(version: 18.09-ce+) + +#### For demo 1.1: Pull the docker image from dockerhub by using following command + +``` +sudo docker pull containerise/ke_apache_beam:ke_apache_analysis_v1.1 +```{{execute}} + diff --git a/KubeEdge v1.14.2/Apache-beam-analysis/step2.md b/KubeEdge v1.14.2/Apache-beam-analysis/step2.md new file mode 100644 index 0000000..99ac038 --- /dev/null +++ b/KubeEdge v1.14.2/Apache-beam-analysis/step2.md @@ -0,0 +1,22 @@ +# Deploy pipeline application + +### Prerequisites + +- Golang(version: 1.14+) +- KubeEdge(version: v1.5+) +- Docker(version: 18.09-ce+) + +#### For demo 1.2: Pull the docker image from dockerhub by using following command + +``` +sudo docker pull containerise/ke_apache_beam:ke_apache_analysis_v1.2 +```{{execute}} + +#### Run the command + +This will shows all images created. Check image named ke_apache_analysis_v1.1 or ke_apache_analysis_v1.2 + +``` +docker images +```{{execute}} + diff --git a/KubeEdge v1.14.2/Apache-beam-analysis/step3.md b/KubeEdge v1.14.2/Apache-beam-analysis/step3.md new file mode 100644 index 0000000..a66c08c --- /dev/null +++ b/KubeEdge v1.14.2/Apache-beam-analysis/step3.md @@ -0,0 +1,20 @@ +## Setup the KubeEdge v.1.24.2 +follow this link + +``` +https://killercoda.com/sarthak-009/scenario/deployment +``` + + +### Try out a application deployment by following below steps. + +``` +kubectl apply -f https://github.com/kubeedge/examples/blob/master/apache-beam-analysis/deployment.yaml +```{{execute}} + +### Then you can use below command to check if the application is normally running. 
+ +``` +kubectl get pods +```{{execute}} + diff --git a/KubeEdge v1.14.2/Apache-beam-analysis/step4.md b/KubeEdge v1.14.2/Apache-beam-analysis/step4.md new file mode 100644 index 0000000..cc7c546 --- /dev/null +++ b/KubeEdge v1.14.2/Apache-beam-analysis/step4.md @@ -0,0 +1,23 @@ +## Clone the repository + +``` +git clone https://github.com/kubeedge/examples.git +```{{execute}} +Change the directory to apache-beam-analysis + +``` +cd examples/apache-beam-analysis +```{{execute}} + +### Add following vendor packages: + +``` +go get -u github.com/yosssi/gmq/mqtt +go get -u github.com/yosssi/gmq/mqtt/client +```{{execute}} + +run: +``` +go build testmachine.go +./testmachine +```{{execute}} \ No newline at end of file diff --git a/KubeEdge v1.14.2/Bluetooth-CC2650/Readme.md b/KubeEdge v1.14.2/Bluetooth-CC2650/Readme.md new file mode 100644 index 0000000..6319a69 --- /dev/null +++ b/KubeEdge v1.14.2/Bluetooth-CC2650/Readme.md @@ -0,0 +1,34 @@ +# Bluetooth With KubeEdge Using CC2650 + +Users can make use of KubeEdge platform to connect and control their bluetooth devices, provided, the user is aware of the of the data sheet information for their device. +Kubernetes Custom Resource Definition (CRD) and KubeEdge bluetooth mapper is being used to support this feature, using which users can control their device from the cloud. Texas Instruments [CC2650 SensorTag device](http://processors.wiki.ti.com/index.php/CC2650_SensorTag_User%27s_Guide) is being shown here as an example. + + +## Description + +KubeEdge support for bluetooth protocol has been demonstrated here by making use of Texas Instruments CC2650 SensorTag device. +This section contains instructions on how to make use of bluetooth mapper of KubeEdge to control CC2650 SensorTag device. + + We will only be focusing on the following features of CC2650 :- + + ``` + 1. IR Temperature + 2. IO-Control : + 2.1 Red Light + 2.2 Greem Light + 2.3 Buzzer + 2.4 Red Light with Buzzer + 2.5 Green Light with Buzzer + 2.6 Red Light along with Green Light + 2.7 Red Light, Green Light along with Buzzer + + ``` + + The bluetooth mapper has the following major components :- + - Action Manager + - Scheduler + - Watcher + - Controller + - Data Converter + + More details on bluetooth mapper can be found [here](https://github.com/kubeedge/kubeedge/blob/master/docs/mappers/bluetooth_mapper.md#bluetooth-mapper). \ No newline at end of file diff --git a/KubeEdge v1.14.2/Bluetooth-CC2650/final.md b/KubeEdge v1.14.2/Bluetooth-CC2650/final.md new file mode 100644 index 0000000..b5ef6b1 --- /dev/null +++ b/KubeEdge v1.14.2/Bluetooth-CC2650/final.md @@ -0,0 +1,4 @@ +Turn ON the CC2650 SensorTag device
+ +The bluetooth mapper is now running, You can monitor the logs of the mapper by using docker logs. You can also play around with the device twin state by altering the desired property in the device instance +and see the result reflect on the SensorTag device. The configurations of the bluetooth mapper can be altered at runtime Please click [Runtime Configurations](https://github.com/kubeedge/kubeedge/blob/master/docs/mappers/bluetooth_mapper.md#runtime-configuration-modifications) for more details. \ No newline at end of file diff --git a/KubeEdge v1.14.2/Bluetooth-CC2650/intro.md b/KubeEdge v1.14.2/Bluetooth-CC2650/intro.md new file mode 100644 index 0000000..fdc6c14 --- /dev/null +++ b/KubeEdge v1.14.2/Bluetooth-CC2650/intro.md @@ -0,0 +1,48 @@ +# Bluetooth With KubeEdge Using CC2650 + + +Users can make use of KubeEdge platform to connect and control their bluetooth devices, provided, the user is aware of the of the data sheet information for their device. +Kubernetes Custom Resource Definition (CRD) and KubeEdge bluetooth mapper is being used to support this feature, using which users can control their device from the cloud. Texas Instruments [CC2650 SensorTag device](http://processors.wiki.ti.com/index.php/CC2650_SensorTag_User%27s_Guide) is being shown here as an example. + + +## Description + +KubeEdge support for bluetooth protocol has been demonstrated here by making use of Texas Instruments CC2650 SensorTag device. +This section contains instructions on how to make use of bluetooth mapper of KubeEdge to control CC2650 SensorTag device. + + We will only be focusing on the following features of CC2650 :- + + ```shell + 1. IR Temperature + 2. IO-Control : + 2.1 Red Light + 2.2 Greem Light + 2.3 Buzzer + 2.4 Red Light with Buzzer + 2.5 Green Light with Buzzer + 2.6 Red Light along with Green Light + 2.7 Red Light, Green Light along with Buzzer + + ``` + + The bluetooth mapper has the following major components :- + - Action Manager + - Scheduler + - Watcher + - Controller + - Data Converter + + More details on bluetooth mapper can be found [here](https://github.com/kubeedge/kubeedge/blob/master/docs/mappers/bluetooth_mapper.md#bluetooth-mapper). + + +## Prerequisites + +### Hardware Prerequisites + +1. Texas instruments CC2650 bluetooth device +2. Linux based edge node with bluetooth support (An Ubuntu 18.04 laptop has been used in this demo) + +### Software Prerequisites + +1. Golang (1.14+) +2. KubeEdge (v1.5+) \ No newline at end of file diff --git a/KubeEdge v1.14.2/Bluetooth-CC2650/step1.md b/KubeEdge v1.14.2/Bluetooth-CC2650/step1.md new file mode 100644 index 0000000..1af45b5 --- /dev/null +++ b/KubeEdge v1.14.2/Bluetooth-CC2650/step1.md @@ -0,0 +1,22 @@ +## Prerequisites + +### Hardware Prerequisites + +1. Texas instruments CC2650 bluetooth device +2. Linux based edge node with bluetooth support (An Ubuntu 18.04 laptop has been used in this demo) + +### Software Prerequisites + +1. Golang (1.14+) +2. KubeEdge (v1.5+) + +## Steps to reproduce + +#### Clone and run KubeEdge. + Please ensure that the kubeedge setup is up and running before execution of step 4 (mentioned below). + +#### Clone the kubeedge/examples repository. 
+ +``` +git clone https://github.com/kubeedge/examples.git $GOPATH/src/github.com/kubeedge/examples +```{{execute}} \ No newline at end of file diff --git a/KubeEdge v1.14.2/Bluetooth-CC2650/step2.md b/KubeEdge v1.14.2/Bluetooth-CC2650/step2.md new file mode 100644 index 0000000..230ac3d --- /dev/null +++ b/KubeEdge v1.14.2/Bluetooth-CC2650/step2.md @@ -0,0 +1,8 @@ +## Create the CC2650 SensorTag device model and device instance. + +``` +cd $GOPATH/src/github.com/kubeedge/examples/bluetooth-CC2650-demo/crds +kubectl apply -f CC2650-device-model.yaml +sed -i "s#edge-node##g" CC2650-device-instance.yaml +kubectl apply -f CC2650-device-instance.yaml +``` diff --git a/KubeEdge v1.14.2/Bluetooth-CC2650/step3.md b/KubeEdge v1.14.2/Bluetooth-CC2650/step3.md new file mode 100644 index 0000000..2132770 --- /dev/null +++ b/KubeEdge v1.14.2/Bluetooth-CC2650/step3.md @@ -0,0 +1,7 @@ +## Please ensure that bluetooth service of your device is ON + +#### Set 'bluetooth=true' label for the node (This label is a prerequisite for the scheduler to schedule bluetooth_mapper pod on the node [which meets the hardware / software prerequisites] ) + +``` +kubectl label nodes bluetooth=true +``` \ No newline at end of file diff --git a/KubeEdge v1.14.2/Bluetooth-CC2650/step4.md b/KubeEdge v1.14.2/Bluetooth-CC2650/step4.md new file mode 100644 index 0000000..4026cb8 --- /dev/null +++ b/KubeEdge v1.14.2/Bluetooth-CC2650/step4.md @@ -0,0 +1,24 @@ +#### Copy the configuration file that has been provided, into its correct path. Please note that the configuration file can be altered as to suit your requirement + +``` +cp $GOPATH/src/github.com/kubeedge/examples/bluetooth-CC2650-demo/config.yaml + +$GOPATH/src/github.com/kubeedge/kubeedge/mappers/bluetooth_mapper/configuration/ +``` + +#### Build the mapper by following the steps given below. + +``` +cd $GOPATH/src/github.com/kubeedge/kubeedge +make bluetoothdevice_image +docker tag bluetooth_mapper:v1.0 /bluetooth_mapper:v1.0 +docker push /bluetooth_mapper:v1.0 +``` + +
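+After the build completes, you can confirm that the mapper image exists locally (a quick sanity check before tagging and pushing):
+
+```
+docker images | grep bluetooth_mapper
+```
+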
+Note: Before pushing the docker image to the remote repository, please ensure that you have signed in to docker from your node. If not, sign in with the following command:
+docker login
+
+Please enter your username and password when prompted
+
+
diff --git a/KubeEdge v1.14.2/Bluetooth-CC2650/step5.md b/KubeEdge v1.14.2/Bluetooth-CC2650/step5.md
new file mode 100644
index 0000000..c61ad7a
--- /dev/null
+++ b/KubeEdge v1.14.2/Bluetooth-CC2650/step5.md
@@ -0,0 +1,12 @@
+### Deploy the mapper by following the steps given below.
+
+```
+cd $GOPATH/src/github.com/kubeedge/kubeedge/mappers/bluetooth_mapper
+```
+#### Please enter the following details in the deployment.yaml :-
+1. Set the name of your edge node at spec.template.spec.volumes.configMap.name
+2. Set your dockerhub username in the image at spec.template.spec.containers.image (a quick check of both fields is shown below)
+
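+Before applying the deployment, a quick check of the two fields edited above can save a failed rollout (a sketch; the exact field layout depends on the deployment.yaml shipped with the mapper):
+
+```
+grep -n -E "configMap:|image:" deployment.yaml
+```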
+ +``` +kubectl create -f deployment.yaml +``` \ No newline at end of file diff --git a/KubeEdge v1.14.2/Images/scenarios.png b/KubeEdge v1.14.2/Images/scenarios.png new file mode 100644 index 0000000..dc59146 Binary files /dev/null and b/KubeEdge v1.14.2/Images/scenarios.png differ diff --git a/KubeEdge v1.14.2/KubeEdge-Counter-Demo/Readme.md b/KubeEdge v1.14.2/KubeEdge-Counter-Demo/Readme.md new file mode 100644 index 0000000..b623dd9 --- /dev/null +++ b/KubeEdge v1.14.2/KubeEdge-Counter-Demo/Readme.md @@ -0,0 +1,7 @@ +## KubeEdge Counter Demo + +### Description + +Counter is a pseudo device that user can run this demo without any extra physical devices. + +Counter run at edge side, and user can control it in web from cloud side, also can get counter value in web from cloud side. \ No newline at end of file diff --git a/KubeEdge v1.14.2/KubeEdge-Counter-Demo/finish.md b/KubeEdge v1.14.2/KubeEdge-Counter-Demo/finish.md new file mode 100644 index 0000000..30e8a39 --- /dev/null +++ b/KubeEdge v1.14.2/KubeEdge-Counter-Demo/finish.md @@ -0,0 +1,11 @@ +### Control counter by visiting Web App Page + +* Visit web app page by the web app link `MASTER_NODE_IP:80`. + +* Choose `ON` option, and click `Execute`, then user can see counter start to count by `docker logs -f counter-container-id` at edge side. + +* Choose `STATUS` option, then click `Execute` to get the counter status, finally counter status and current counter value will display in web. + + also you can watch counter status by `kubectl get device counter -o yaml -w` at cloud side. + +* Choose `OFF` option, and click `Execute`, counter stop work at edge side. \ No newline at end of file diff --git a/KubeEdge v1.14.2/KubeEdge-Counter-Demo/index.json b/KubeEdge v1.14.2/KubeEdge-Counter-Demo/index.json new file mode 100644 index 0000000..977fd2e --- /dev/null +++ b/KubeEdge v1.14.2/KubeEdge-Counter-Demo/index.json @@ -0,0 +1,29 @@ +{ + "title": "KubeEdge Deployment", + "description": "KubeEdge Counter Demo", + "details": { + "steps": [ + { + "title": "Step 1/3", + "text": "step1.md" + }, + { + "title": "Step 2/3", + "text": "step2.md" + }, + { + "title": "Step 3/3", + "text": "step3.md" + } + ], + "intro": { + "text": "intro.md" + }, + "finish": { + "text": "finish.md" + } + }, + "backend": { + "imageid": "ubuntu" + } +} \ No newline at end of file diff --git a/KubeEdge v1.14.2/KubeEdge-Counter-Demo/intro.md b/KubeEdge v1.14.2/KubeEdge-Counter-Demo/intro.md new file mode 100644 index 0000000..6ca58da --- /dev/null +++ b/KubeEdge v1.14.2/KubeEdge-Counter-Demo/intro.md @@ -0,0 +1,24 @@ +## Prerequisites + +### Hardware Prerequisites + +* RaspBerry PI (RaspBerry PI 4 has been used for this demo). + +### Software Prerequisites + +* A running Kubernetes cluster. + + *NOTE*: + + add follows `--insecure-port=8080` and `--insecure-bind-address=0.0.0.0` options into */etc/kubernetes/manifests/kube-apiserver.yaml* + +* KubeEdge v1.5+ + +* MQTT Broker is running on Raspi. + +## Steps to run the demo + +### Create the device model and device instance for the counter + +With the Device CRD APIs now installed in the cluster, we create the device model and instance for the counter using the yaml files. 
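+
+Since the device model and instance are custom resources, it can help to confirm that the device CRDs are registered in the cluster before applying the yaml files (a sketch; the exact CRD names can vary across KubeEdge versions):
+
+```
+kubectl get crds | grep devices.kubeedge.io
+```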
+ diff --git a/KubeEdge v1.14.2/KubeEdge-Counter-Demo/step1.md b/KubeEdge v1.14.2/KubeEdge-Counter-Demo/step1.md new file mode 100644 index 0000000..4bdd4cd --- /dev/null +++ b/KubeEdge v1.14.2/KubeEdge-Counter-Demo/step1.md @@ -0,0 +1,21 @@ +## clone the demo code + +``` +git clone https://github.com/kubeedge/examples.git +``` + +``` +cd $GOPATH/src/github.com/kubeedge/examples/kubeedge-counter-demo +``` +
+replace "" with your edge node name +
+ +``` +sed -i "s#edge-node##" crds/kubeedge-counter-instance.yaml +``` + +``` +kubectl create -f crds/kubeedge-counter-model.yaml +kubectl create -f crds/kubeedge-counter-instance.yaml +``` \ No newline at end of file diff --git a/KubeEdge v1.14.2/KubeEdge-Counter-Demo/step2.md b/KubeEdge v1.14.2/KubeEdge-Counter-Demo/step2.md new file mode 100644 index 0000000..e5f0998 --- /dev/null +++ b/KubeEdge v1.14.2/KubeEdge-Counter-Demo/step2.md @@ -0,0 +1,16 @@ +### Run KubeEdge Web App + +The KubeEdge Web App runs in a VM on cloud. + +``` +cd $GOPATH/src/github.com/kubeedge/examples/kubeedge-counter-demo/web-controller-app +make +make docker +``` + +``` +cd $GOPATH/src/github.com/kubeedge/examples/kubeedge-counter-demo/crds +kubectl create -f kubeedge-web-controller-app.yaml +``` + +**Note: instance must be created after model and deleted before model.** \ No newline at end of file diff --git a/KubeEdge v1.14.2/KubeEdge-Counter-Demo/step3.md b/KubeEdge v1.14.2/KubeEdge-Counter-Demo/step3.md new file mode 100644 index 0000000..5dac099 --- /dev/null +++ b/KubeEdge v1.14.2/KubeEdge-Counter-Demo/step3.md @@ -0,0 +1,17 @@ +### Run KubeEdge Pi Counter App + +The KubeEdge Counter App run in raspi. + +``` +cd $GOPATH/src/github.com/kubeedge/examples/kubeedge-counter-demo/counter-mapper +make +make docker +``` +``` +cd $GOPATH/src/github.com/kubeedge/examples/kubeedge-counter-demo/crds +kubectl create -f kubeedge-pi-counter-app.yaml +``` + +The App will subscribe to the `$hw/events/device/counter/twin/update/document` topic, and when it receives the expected control command on the topic, it will turn on/off the counter, also it will fresh counter value and publish value to `$hw/events/device/counter/twin/update` topic, then the latest counter status will be sychronized between edge and cloud. + +At last, user can get the counter status at cloud side. diff --git a/KubeEdge v1.14.2/KubeEdge-Twitter-Demo/Readme.md b/KubeEdge v1.14.2/KubeEdge-Twitter-Demo/Readme.md new file mode 100644 index 0000000..83a8579 --- /dev/null +++ b/KubeEdge v1.14.2/KubeEdge-Twitter-Demo/Readme.md @@ -0,0 +1,3 @@ +# KubeEdge Twitter Demo + +A user tweets `kubeedge play ` to play the track. The tweet metadata is pushed to the edge node and the track is played on the speaker connected to the edge node. diff --git a/KubeEdge v1.14.2/KubeEdge-Twitter-Demo/index.json b/KubeEdge v1.14.2/KubeEdge-Twitter-Demo/index.json new file mode 100644 index 0000000..60a8758 --- /dev/null +++ b/KubeEdge v1.14.2/KubeEdge-Twitter-Demo/index.json @@ -0,0 +1,42 @@ +{ + "title": "KubeEdge Deployment", + "description": "KubeEdge Twitter Demo", + "details": { + "steps": [ + { + "title": "Step 1/7", + "text": "step1.md" + }, + { + "title": "Step 2/7", + "text": "step2.md" + }, + { + "title": "Step 3/7", + "text": "step3.md" + }, + { + "title": "Step 4/7", + "text": "step4.md" + }, + { + "title": "Step 5/7", + "text": "step5.md" + }, + { + "title": "Step 6/7", + "text": "step6.md" + }, + { + "title": "Step 7/7", + "text": "step7.md" + } + ], + "intro": { + "text": "intro.md" + } + }, + "backend": { + "imageid": "ubuntu" + } +} \ No newline at end of file diff --git a/KubeEdge v1.14.2/KubeEdge-Twitter-Demo/intro.md b/KubeEdge v1.14.2/KubeEdge-Twitter-Demo/intro.md new file mode 100644 index 0000000..922071e --- /dev/null +++ b/KubeEdge v1.14.2/KubeEdge-Twitter-Demo/intro.md @@ -0,0 +1,11 @@ +## Prerequisites + +### Hardware Prerequisites +- RaspBerry-Pi (RaspBerry-Pi 3 has been used for this demo). 
This will be the edge node to which the speaker will be connected. +- A speaker for playing the track. + +### Software Prerequisites +- A running Kubernetes cluster. +- KubeEdge v1.5.0+ +- In order to control the speaker and play the desired track , we need to manage the speaker connected to the rpi. + KubeEdge allows us to manage devices using K8S custom resource definitions. The design proposal is [here](https://github.com/kubeedge/kubeedge/blob/master/docs/proposals/device-crd.md). Apply the CRD schema yamls available [here](https://github.com/kubeedge/kubeedge/tree/master/build/crds/devices) using kubectl. \ No newline at end of file diff --git a/KubeEdge v1.14.2/KubeEdge-Twitter-Demo/step1.md b/KubeEdge v1.14.2/KubeEdge-Twitter-Demo/step1.md new file mode 100644 index 0000000..c023fe6 --- /dev/null +++ b/KubeEdge v1.14.2/KubeEdge-Twitter-Demo/step1.md @@ -0,0 +1,6 @@ +## Get the Demo code + +``` +git clone https://github.com/kubeedge/examples.git +``` + diff --git a/KubeEdge v1.14.2/KubeEdge-Twitter-Demo/step2.md b/KubeEdge v1.14.2/KubeEdge-Twitter-Demo/step2.md new file mode 100644 index 0000000..d167a01 --- /dev/null +++ b/KubeEdge v1.14.2/KubeEdge-Twitter-Demo/step2.md @@ -0,0 +1,11 @@ +### Create the device model and device instance for the speaker + +With the Device CRD APIs now installed in the cluster , we now create the device model and instance for the speaker using the yaml files under examples/crds. + +### Create Secret for Twitter Credentials +- The cloud app in the demo needs to watch KubeEdge tweets. For this the application needs to sign the requests with a Twitter account. +Follow the steps mentioned here [Guide for reference](https://docs.inboundnow.com/guide/create-twitter-application/) to generate the OAuth credentials. Create a Kubernetes Secret`twittersecret` with the credentials as below : + +``` +kubectl create secret generic twittersecret --from-literal=CONSUMER_KEY= --from-literal=CONSUMER_SECRET= --from-literal=ACCESS_TOKEN= --from-literal=ACCESS_TOKEN_SECRET= +``` \ No newline at end of file diff --git a/KubeEdge v1.14.2/KubeEdge-Twitter-Demo/step3.md b/KubeEdge v1.14.2/KubeEdge-Twitter-Demo/step3.md new file mode 100644 index 0000000..cb118e3 --- /dev/null +++ b/KubeEdge v1.14.2/KubeEdge-Twitter-Demo/step3.md @@ -0,0 +1,8 @@ +### Run the ke-tweeeter app + +- The ke-tweeter-app runs in a VM on the cloud and watches for KubeTweets. It can deployed using a Kubernetes deployment yaml + +``` + cd $GOPATH/github.com/ke-twitter-demo/ke-tweeter/deployments/ + kubectl create -f ke-tweeter.yaml +``` \ No newline at end of file diff --git a/KubeEdge v1.14.2/KubeEdge-Twitter-Demo/step4.md b/KubeEdge v1.14.2/KubeEdge-Twitter-Demo/step4.md new file mode 100644 index 0000000..e69de29 diff --git a/KubeEdge v1.14.2/KubeEdge-Twitter-Demo/step5.md b/KubeEdge v1.14.2/KubeEdge-Twitter-Demo/step5.md new file mode 100644 index 0000000..c44faae --- /dev/null +++ b/KubeEdge v1.14.2/KubeEdge-Twitter-Demo/step5.md @@ -0,0 +1,13 @@ +### Build the track player app +- Cross-complie the PiApp which will run on the RPi and play the desired track. 
+ +#Pls give the appropriate arm version of your device + +``` +~/go/src/github.com/ke-twitter-demo$export GOARCH=arm +~/go/src/github.com/ke-twitter-demo$export GOOS="linux" +~/go/src/github.com/ke-twitter-demo$export GOARM=6 +~/go/src/github.com/ke-twitter-demo$export CGO_ENABLED=1 +~/go/src/github.com/ke-twitter-demo$export CC=arm-linux-gnueabi-gcc +~/go/src/github.com/ke-twitter-demo$ go build Pi_app/trackplayer.go +``` \ No newline at end of file diff --git a/KubeEdge v1.14.2/KubeEdge-Twitter-Demo/step6.md b/KubeEdge v1.14.2/KubeEdge-Twitter-Demo/step6.md new file mode 100644 index 0000000..ba07955 --- /dev/null +++ b/KubeEdge v1.14.2/KubeEdge-Twitter-Demo/step6.md @@ -0,0 +1,8 @@ +### Run the track player app +- Copy the trackplayer binary to the rpi. Make sure the MQTT broker is running on the rpi. + Run the binary. The app will subscribe to the `$hw/events/device/speaker-01/twin/update/document` topic + and when it receives the desired track on the topic, it will play it on the speaker. + +``` +./trackplayer +``` \ No newline at end of file diff --git a/KubeEdge v1.14.2/KubeEdge-Twitter-Demo/step7.md b/KubeEdge v1.14.2/KubeEdge-Twitter-Demo/step7.md new file mode 100644 index 0000000..59aebd9 --- /dev/null +++ b/KubeEdge v1.14.2/KubeEdge-Twitter-Demo/step7.md @@ -0,0 +1,8 @@ +### Tweet to play track + +- Login to twitter and tweet the track name you wish to play. Please tweet in the following format : + +``` +kubeedge play +``` +The track info is pushed to the rpi and the track is played on the speaker. \ No newline at end of file diff --git a/KubeEdge v1.14.2/README.md b/KubeEdge v1.14.2/README.md new file mode 100644 index 0000000..395f641 --- /dev/null +++ b/KubeEdge v1.14.2/README.md @@ -0,0 +1,9 @@ +# KubeEdge Killercoda-Scenerio + +We have created a tutorial in the interactive learning platform Killercoda for KubeEdge deployment. This can give a hands-on experience of KubeEdge deployment. The tutorial is created on KubeEdge release v1.14.2 + +![alt text](/Images/scenarios.png) + +This is available on + +Pls try it out!! diff --git a/KubeEdge v1.14.2/deployment/finish.md b/KubeEdge v1.14.2/deployment/finish.md new file mode 100644 index 0000000..efcb640 --- /dev/null +++ b/KubeEdge v1.14.2/deployment/finish.md @@ -0,0 +1 @@ +## Congratulations on successful completion of KubeEdge Deployment Scenario !!! 
\ No newline at end of file diff --git a/KubeEdge v1.14.2/deployment/index.json b/KubeEdge v1.14.2/deployment/index.json new file mode 100644 index 0000000..1598b5c --- /dev/null +++ b/KubeEdge v1.14.2/deployment/index.json @@ -0,0 +1,53 @@ +{ + "title": "KubeEdge Deployment", + "description": "Deploying KubeEdge", + "details": { + "steps": [ + { + "title": "Step 1/9", + "text": "step1.md" + }, + { + "title": "Step 2/9", + "text": "step2.md" + }, + { + "title": "Step 3/9", + "text": "step3.md" + }, + { + "title": "Step 4/9", + "text": "step4.md" + }, + { + "title": "Step 5/9", + "text": "step5.md" + }, + { + "title": "Step 6/9", + "text": "step6.md" + }, + { + "title": "Step 7/9", + "text": "step7.md" + }, + { + "title": "Step 8/9", + "text": "step8.md" + }, + { + "title": "Step 9/9", + "text": "step9.md" + } + ], + "intro": { + "text": "intro.md" + }, + "finish": { + "text": "finish.md" + } + }, + "backend": { + "imageid": "ubuntu" + } +} \ No newline at end of file diff --git a/KubeEdge v1.14.2/deployment/intro.md b/KubeEdge v1.14.2/deployment/intro.md new file mode 100644 index 0000000..4edd522 --- /dev/null +++ b/KubeEdge v1.14.2/deployment/intro.md @@ -0,0 +1,5 @@ +### Let's deploy KubeEdge in 10 mins + +
+
+KubeEdge is built upon Kubernetes and extends native containerized application orchestration and device management to hosts at the Edge. It consists of a cloud part and an edge part, and provides core infrastructure support for networking, application deployment and metadata synchronisation between cloud and edge.
\ No newline at end of file
diff --git a/KubeEdge v1.14.2/deployment/step1.md b/KubeEdge v1.14.2/deployment/step1.md
new file mode 100644
index 0000000..284747a
--- /dev/null
+++ b/KubeEdge v1.14.2/deployment/step1.md
@@ -0,0 +1,21 @@
+# Install kind
+
+Kind is a tool for running local Kubernetes clusters using Docker container “nodes”.
+
+Run the command below to install kind:
+```
+curl -Lo ./kind "https://github.com/kubernetes-sigs/kind/releases/download/v0.14.0/kind-$(uname)-amd64"
+chmod +x ./kind
+mv ./kind /usr/local/bin/kind
+```{{execute}}
+
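+You can verify that the binary is installed and on your PATH:
+
+```
+kind version
+```
+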
+
+ +In order to manage the cluster later using the CLI, install Kubectl: + +``` +curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.22.6/bin/linux/amd64/kubectl +chmod +x ./kubectl +sudo mv ./kubectl /usr/local/bin/kubectl +```{{execute}} \ No newline at end of file diff --git a/KubeEdge v1.14.2/deployment/step2.md b/KubeEdge v1.14.2/deployment/step2.md new file mode 100644 index 0000000..4893b4b --- /dev/null +++ b/KubeEdge v1.14.2/deployment/step2.md @@ -0,0 +1,8 @@ +# Create cluster + +Run the command below to one-click create a cluster using kind. + +``` +sudo kind create cluster + +```{{execute}} \ No newline at end of file diff --git a/KubeEdge v1.14.2/deployment/step3.md b/KubeEdge v1.14.2/deployment/step3.md new file mode 100644 index 0000000..d7a238e --- /dev/null +++ b/KubeEdge v1.14.2/deployment/step3.md @@ -0,0 +1,14 @@ +# Setup keadm + +Keadm is used to install the cloud and edge components of KubeEdge. + +Run the command below to one-click install keadm. + +``` +wget https://github.com/kubeedge/kubeedge/releases/download/v1.14.2/keadm-v1.14.2-linux-amd64.tar.gz +tar -zxvf keadm-v1.14.2-linux-amd64.tar.gz +sudo cp keadm-v1.14.2-linux-amd64/keadm/keadm /usr/local/bin/keadm + +```{{execute}} + + diff --git a/KubeEdge v1.14.2/deployment/step4.md b/KubeEdge v1.14.2/deployment/step4.md new file mode 100644 index 0000000..5b2ef1a --- /dev/null +++ b/KubeEdge v1.14.2/deployment/step4.md @@ -0,0 +1,18 @@ +# Deploy cloudcore (on Master Node) + +keadm init will install cloudcore, generate the certs and install the CRDs. +--advertise-address (non-mandatory flag) is the address exposed by the cloud side (will be added to the SANs of the CloudCore certificate), the default value is the local IP. + +``` +sudo keadm deprecated init --advertise-address="CloudCore-IP" --kubeedge-version=1.14.2 --kube-config=/root/.kube/config + +```{{execute}} + +## check if cloudcore running successfully: + +``` +ps -elf | grep cloudcore + +```{{execute}} + +**Now you can see KubeEdge cloudcore is running.** \ No newline at end of file diff --git a/KubeEdge v1.14.2/deployment/step5.md b/KubeEdge v1.14.2/deployment/step5.md new file mode 100644 index 0000000..ca9c2fd --- /dev/null +++ b/KubeEdge v1.14.2/deployment/step5.md @@ -0,0 +1,72 @@ +# Setup edgecore(on Edge Node) + +In Kubernetes 1.23 and earlier, you could use Docker Engine with Kubernetes, relying on a built-in component of Kubernetes named dockershim. The dockershim component was removed in the Kubernetes 1.24 release; however, a third-party replacement, cri-dockerd, is available. The cri-dockerd adapter lets you use Docker Engine through the Container Runtime Interface. 
+ +### Setup cri-dockerd + +``` +wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.4/cri-dockerd-0.3.4.amd64.tgz +tar -xvf cri-dockerd-0.3.4.amd64.tgz +cd cri-dockerd/ +mkdir -p /usr/local/bin +install -o root -g root -m 0755 ./cri-dockerd /usr/local/bin/cri-dockerd + +```{{execute}} + +### Add the files cri-docker.socker cri-docker.service + +``` +sudo tee /etc/systemd/system/cri-docker.service << EOF +[Unit] +Description=CRI Interface for Docker Application Container Engine +Documentation=https://docs.mirantis.com +After=network-online.target firewalld.service docker.service +Wants=network-online.target +Requires=cri-docker.socket +[Service] +Type=notify +ExecStart=/usr/local/bin/cri-dockerd --container-runtime-endpoint fd:// --network-plugin= +ExecReload=/bin/kill -s HUP $MAINPID +TimeoutSec=0 +RestartSec=2 +Restart=always +StartLimitBurst=3 +StartLimitInterval=60s +LimitNOFILE=infinity +LimitNPROC=infinity +LimitCORE=infinity +TasksMax=infinity +Delegate=yes +KillMode=process +[Install] +WantedBy=multi-user.target +EOF + +sudo tee /etc/systemd/system/cri-docker.socket << EOF +[Unit] +Description=CRI Docker Socket for the API +PartOf=cri-docker.service +[Socket] +ListenStream=%t/cri-dockerd.sock +SocketMode=0660 +SocketUser=root +SocketGroup=docker +[Install] +WantedBy=sockets.target +EOF + + +```{{execute}} + +### Daemon reload + +``` +systemctl daemon-reload +systemctl enable cri-docker.service +systemctl enable --now cri-docker.socket +systemctl start cri-docker.service + +```{{execute}} + + + diff --git a/KubeEdge v1.14.2/deployment/step6.md b/KubeEdge v1.14.2/deployment/step6.md new file mode 100644 index 0000000..e71fae0 --- /dev/null +++ b/KubeEdge v1.14.2/deployment/step6.md @@ -0,0 +1,11 @@ +## Installing CNI plugin + +``` +wget https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz + +mkdir -p /opt/cni/bin + +tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.3.0.tgz + +```{{execute}} + diff --git a/KubeEdge v1.14.2/deployment/step7.md b/KubeEdge v1.14.2/deployment/step7.md new file mode 100644 index 0000000..b606141 --- /dev/null +++ b/KubeEdge v1.14.2/deployment/step7.md @@ -0,0 +1,11 @@ +### Setup keadm + +``` +wget https://github.com/kubeedge/kubeedge/releases/download/v1.14.2/keadm-v1.14.2-linux-amd64.tar.gz + +tar -zxvf keadm-v1.14.2-linux-amd64.tar.gz + +cp keadm-v1.14.2-linux-amd64/keadm/keadm /usr/local/bin/ + +```{{execute}} + diff --git a/KubeEdge v1.14.2/deployment/step8.md b/KubeEdge v1.14.2/deployment/step8.md new file mode 100644 index 0000000..fc4273c --- /dev/null +++ b/KubeEdge v1.14.2/deployment/step8.md @@ -0,0 +1,18 @@ +### Get token from cloud side (on Master Node) +``` +sudo keadm gettoken +```{{execute}} + +# On Edge +
+
+
+Next, run keadm join to join the edge node.
+
+```
+sudo keadm join --cloudcore-ipport="Cloudcore-IP:10000" --token={token} --kubeedge-version=v1.14.2 --runtimetype=remote --remote-runtime-endpoint=unix:///var/run/cri-dockerd.sock
+
+```{{execute}}
+
+keadm join will install edgecore and mqtt. The --cloudcore-ipport flag is mandatory.
+
+**Now you can see KubeEdge edgecore is running.**
\ No newline at end of file
diff --git a/KubeEdge v1.14.2/deployment/step9.md b/KubeEdge v1.14.2/deployment/step9.md
new file mode 100644
index 0000000..aee3b6d
--- /dev/null
+++ b/KubeEdge v1.14.2/deployment/step9.md
@@ -0,0 +1,12 @@
+# Check deployment
+
+Check the state of the nodes from the cloud machine:
+
+```
+kubectl get node
+
+```{{execute}}
+
+There are two nodes: one assumes the master role and the other assumes the edge role, indicating that the edge node is now managed and controlled by the cloud side.
+
+**Congratulations! KubeEdge has been deployed!**
\ No newline at end of file