This repo walks through deploying a workload and setting up the AIO Data Processor to manipulate data and send it to the cloud.
Refer to azure-samples/azure-edge-extensions-aio-iac-terraform for instructions on how to get an AIO environment installed using Terraform.
- (Optional, for Windows) WSL installed and set up.
- Azure CLI available on the command line where this will be deployed.
- Terraform available on the command line where this will be deployed.
- Docker available on the command line.
- (Optional) Owner access to a Subscription to deploy the infrastructure.
- (Or) Owner access to a Resource Group with an existing cluster configured and connected to Azure Arc.
- azure-samples/azure-edge-extensions-aio-iac-terraform installed and deployed
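The CLI prerequisites above can be verified with a small shell sketch. The tool list below is an assumption drawn from the steps in this walkthrough; adjust it to your setup:

```shell
# Report whether each required CLI tool is on the PATH.
check() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "ok: $1"
  else
    echo "missing: $1"
  fi
}

# Tools used in the steps below (assumed list).
for tool in az terraform docker kubectl; do
  check "$tool"
done
```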
This will first log in to the Azure tenant where your Azure Arc cluster is deployed. It will then set up an Azure Arc proxy and connect to a Pod running in your cluster to run mqttui.
- Log in to the AZ CLI:

```shell
az login --tenant <tenant>.onmicrosoft.com
```
- Make sure your subscription is the one that you would like to use:

```shell
az account show
```
- Change to the subscription that you would like to use if needed:

```shell
az account set -s <subscription-id>
```
- Start the Azure Arc proxy on your local machine to access the Kubernetes cluster:

```shell
az connectedk8s proxy -g rg-<name> -n arc-<name>
```
- Exec into the mqtt-client Kubernetes Pod that was deployed from the azure-samples/azure-edge-extensions-aio-iac-terraform repo:

```shell
kubectl exec -it deployments/mqtt-client -c mqtt-client -n aio -- sh
```
- Run mqttui from this new exec'd console running in your Kubernetes Pod:

```shell
mqttui -b mqtts://aio-mq-dmqtt-frontend:8883 -u '$sat' --password $(cat /var/run/secrets/tokens/mq-sat) --insecure
```
- Add a root-<unique-name>.tfvars file to the root of the deploy directory that contains the following (refer to deploy/sample-aio.general.tfvars.example for an example):

```terraform
// <project-root>/deploy/root-<unique-name>.tfvars
name     = "<unique-name>"
location = "<location>"
```
- From the deploy/1-infra directory, execute the following (the <unique-name>.auto.tfvars created earlier will automatically be applied):

```shell
terraform init
terraform apply -var-file="../root-<unique-name>.tfvars"
```
- This will set up the Azure cloud resources needed for this repo.
- This step will also output an acr-pull-secret.sh script to a new out directory.
- Open the out/acr-pull-secret.sh that was created in the previous step, copy its contents, and run them on the command line:

```shell
eval "$(./out/acr-pull-secret.sh)"
```
- This will add a new Secret to your cluster containing the Service Principal with AcrPull permissions, which your Kubernetes cluster uses to pull images from your new ACR.
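For reference, an image pull Secret of this kind stores a base64-encoded Docker config JSON. This sketch uses a made-up payload (not your real credentials, and an assumed registry name) to show the shape of that data:

```shell
# Illustrative payload only; a real pull secret stores Service Principal
# credentials for your ACR under .data.".dockerconfigjson".
sample='{"auths":{"acrsample.azurecr.io":{"username":"sp-id","password":"sp-secret"}}}'

# Kubernetes stores the value base64-encoded:
encoded=$(printf '%s' "$sample" | base64 | tr -d '\n')

# Decoding it recovers the Docker config JSON:
printf '%s' "$encoded" | base64 -d
```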
- Log in to your new ACR. The previous step removed any hyphens from your name variable, so be sure to remove them when you log in:

```shell
az acr login --name acr<name with no hyphens>
```
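As a sanity check, you can derive the ACR name locally. This sketch assumes the naming scheme is simply "acr" plus your name variable with hyphens removed:

```shell
# Assumed naming scheme: "acr" + name with all hyphens stripped.
name="my-unique-name"
acr_name="acr$(printf '%s' "$name" | tr -d '-')"
echo "$acr_name"
```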
- Build and push the MqttSink that's in this project:

```shell
docker buildx build --platform linux/amd64 -t acr<name with no hyphens>.azurecr.io/mqttsink:0.0.1 --push -f src/MqttSink/Dockerfile .
```
This will deploy the Dapr Helm chart using the AIO Orchestrator. It will also install the Dapr PubSub and StateStore Pluggable components.
- Repeat the same terraform commands for the deploy/2-aio-dapr directory:

```shell
terraform init
terraform apply -var-file="../root-<unique-name>.tfvars"
```
This will deploy the new workload that was built and pushed to ACR using the AIO Orchestrator. It will be deployed with Dapr sidecars, which allow the MqttSink to subscribe and publish to topics on the AIO MQ broker.
- Add a <unique-name>.auto.tfvars to the deploy/3-aio-mqtt-sink directory that contains the following variables (refer to the deploy/3-aio-mqtt-sink/sample-aio.auto.tfvars.example for an example):

```terraform
// <project-root>/deploy/3-aio-mqtt-sink/<unique-name>.auto.tfvars
mqtt_sink_version = "0.0.1"
```
- Repeat the same terraform commands for the deploy/3-aio-mqtt-sink directory:

```shell
terraform init
terraform apply -var-file="../root-<unique-name>.tfvars"
```
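Once the workload is running, its Dapr sidecar exposes the standard Dapr HTTP publish endpoint on localhost. This sketch builds that publish URL; the component name "aio-mq-pubsub" and topic "telemetry" are assumptions, so substitute the names from your own Dapr PubSub component:

```shell
# Standard Dapr HTTP API shape: /v1.0/publish/{pubsub-component}/{topic}.
# Component and topic names below are placeholders, not from this repo.
dapr_port=3500
pubsub="aio-mq-pubsub"
topic="telemetry"
publish_url="http://localhost:${dapr_port}/v1.0/publish/${pubsub}/${topic}"
echo "$publish_url"

# Against a running sidecar you could then publish a test message:
# curl -X POST "$publish_url" -H 'Content-Type: application/json' -d '{"temperature": 21.5}'
```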