This is the third in a series of posts on deploying and managing a Private Kubernetes Cluster in Azure.
Day 71 - The Current State of Kubernetes in Azure
Day 72 - Deploying a Private Kubernetes Cluster in Azure - Part 1
Day 73 - Deploying a Private Kubernetes Cluster in Azure - Part 2
Day 74 - Deploying a Private Kubernetes Cluster in Azure - Part 3
In today's article we will cover how to access the Private Kubernetes Cluster from an Azure Container Instance.
Options for connecting to a Private Kubernetes Cluster
Deploy a new Subnet in the Kubernetes VNet
Retrieve the IDs of the Private Kubernetes Cluster VNet and the new Subnet
Deploy a new Resource Group for the Kubernetes Jumpbox Container
Deploy the Kubernetes Jumpbox Container
Connect to the Kubernetes Jumpbox Container in the Azure Portal
Connect to the Private Kubernetes Cluster
Things to Consider
Conclusion
The Microsoft recommended way of accessing a Private Kubernetes Cluster is to deploy a VM that is either on the same VNet as the Cluster or in a different VNet that is peered with the Cluster's VNet.
Instead of using a VM, we are going to add a new Subnet to the existing Kubernetes Cluster VNet and then deploy an Azure Container Instance running a container image with kubectl already installed. Additionally, the variable values from Part 1 will be passed to the container at runtime as secured environment variables. We will use them to connect to the K8s Master Node, copy its kubeconfig file to the container, and then use that file to manage the Kubernetes Cluster.
Retrieve the current name of the existing Kubernetes Cluster VNet.
K8S_VNET_NAME=$(az network vnet list \
--resource-group k8s-100days-iac \
--query "[].name" \
--output tsv)
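Note that az network vnet list returns every VNet in the Resource Group, so K8S_VNET_NAME would contain multiple names if the group held more than one. If you are scripting this, a quick guard catches that case early. A minimal sketch in plain shell, illustrated with a sample value standing in for the real command output:

```shell
# Sample value standing in for the 'az network vnet list' result;
# a multi-VNet resource group would yield one name per line.
K8S_VNET_NAME="k8s-vnet-12345"

# Fail if zero or more than one VNet name came back.
if [ "$(echo "$K8S_VNET_NAME" | grep -c .)" -ne 1 ]; then
    echo "Expected exactly one VNet in the resource group." >&2
    exit 1
fi

echo "Using VNet: $K8S_VNET_NAME"
```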
Next, run the following command to deploy a new Subnet, called jumpbox-subnet, for the Kubernetes Jumpbox Container in the VNet.
DEPLOY_JUMPBOX_SUBNET=$(az network vnet subnet create \
--name jumpbox-subnet \
--vnet-name $K8S_VNET_NAME \
--resource-group k8s-100days-iac \
--address-prefixes 10.239.1.0/24)
Next, run the following command to verify that the Subnet was deployed successfully.
echo $DEPLOY_JUMPBOX_SUBNET | jq .provisioningState
You should get back the following response.
"Succeeded"
Next, run the following command to retrieve the ID of the Private Kubernetes Cluster VNet.
K8S_VNET_ID=$(az network vnet list \
--resource-group k8s-100days-iac \
--query "[].id" \
--output tsv)
Next, run the following command to retrieve the ID of jumpbox-subnet.
K8S_JUMPBOX_SUBNET_ID=$(echo $DEPLOY_JUMPBOX_SUBNET | jq .id | tr -d '"')
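As a side note, the jq .id | tr -d '"' pipeline can be shortened with jq's -r (raw output) flag, which emits the string value without the surrounding quotes. A quick illustration against a sample value standing in for the subnet creation output (the real ID would contain your subscription ID):

```shell
# Sample JSON standing in for the subnet creation output.
SAMPLE='{"id": "/subscriptions/.../subnets/jumpbox-subnet", "name": "jumpbox-subnet"}'

# -r prints the raw string, so no tr -d '"' cleanup is needed.
SUBNET_ID=$(echo "$SAMPLE" | jq -r .id)
echo "$SUBNET_ID"
```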
Run the following command to deploy a new Resource Group for the Kubernetes Jumpbox Container.
az group create \
--name "k8s-100days-iac-jumpbox" \
--location "westeurope"
You should get back the following output.
{
"id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/k8s-100days-iac-jumpbox",
"location": "westeurope",
"managedBy": null,
"name": "k8s-100days-iac-jumpbox",
"properties": {
"provisioningState": "Succeeded"
},
"tags": null,
"type": "Microsoft.Resources/resourceGroups"
}
Next, run the following command to deploy an Azure Container Instance for connecting to the Kubernetes Cluster.
az container create \
--name k8s-jumpbox \
--resource-group k8s-100days-iac-jumpbox \
--image starkfell/k8s-jumpbox \
--ip-address private \
--vnet $K8S_VNET_ID \
--subnet $K8S_JUMPBOX_SUBNET_ID \
--secure-environment-variables \
"SSH_KEY_PASSWORD"="$SSH_KEY_PASSWORD" \
"K8S_SSH_PRIVATE_KEY"="$SSH_PRIVATE_KEY" \
"K8S_SSH_PRIVATE_KEY_NAME"="k8s-100days-iac-${RANDOM_ALPHA}"
The Container will take a few minutes to deploy, as additional networking resources are involved when deploying into a virtual network compared to deploying a standard container instance. Once it has finished deploying and shows a state of Running, you should see the following at the bottom of the output.
"osType": "Linux",
"provisioningState": "Succeeded",
"resourceGroup": "k8s-100days-iac-jumpbox",
"restartPolicy": "Always",
"tags": {},
"type": "Microsoft.ContainerInstance/containerGroups",
"volumes": null
}
NOTE: If for some reason your Azure Container Instance is assigned a Private IP Address that isn't in the jumpbox-subnet, delete the Azure Container Instance and redeploy it. This only happened once during the writing of this article, but I wanted to mention it in case it happens to you.
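One way to catch that situation from a script is to compare the Container's private IP against the jumpbox-subnet prefix. A minimal sketch; the az container show call is commented out and a sample IP substituted, since the check itself is plain shell:

```shell
# In a real run you would retrieve the IP from the deployed container:
# ACI_IP=$(az container show \
#   --name k8s-jumpbox \
#   --resource-group k8s-100days-iac-jumpbox \
#   --query ipAddress.ip \
#   --output tsv)
ACI_IP="10.239.1.4"  # sample value for illustration

# jumpbox-subnet is 10.239.1.0/24, so the first three octets must match.
case "$ACI_IP" in
    10.239.1.*)
        echo "Container IP $ACI_IP is inside jumpbox-subnet."
        ;;
    *)
        echo "Container IP $ACI_IP is OUTSIDE jumpbox-subnet - delete and redeploy." >&2
        ;;
esac
```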
Next, open up a web browser and login to the Azure Portal. In the Subscription where you deployed your resources, browse to Resource Group --> k8s-100days-iac-jumpbox and then click on the Azure Container Instance, k8s-jumpbox. Next, under Settings, click on Containers and then click on the Connect Tab. You will be prompted to Choose Start Up Command, select /bin/bash and then click on the Connect button. Your view should be similar to what is shown below.
Copy and Paste the rest of the instructions that follow in the Console of the Jumpbox Container.
Next, run the following command to echo out the SSH Private Key from its environment variable on the Azure Container Instance to a file.
echo "$K8S_SSH_PRIVATE_KEY" > $K8S_SSH_PRIVATE_KEY_NAME && \
chmod 0600 $K8S_SSH_PRIVATE_KEY_NAME
Next, run the following command to retrieve the Master kubeconfig File from the Kubernetes Master Host.
sshpass -P "pass" \
-p "$SSH_KEY_PASSWORD" /usr/bin/scp \
-o "StrictHostKeyChecking=no" \
-o "UserKnownHostsFile=/dev/null" \
-i "$K8S_SSH_PRIVATE_KEY_NAME" \
[email protected]:/home/linuxadmin/.kube/config master-kubeconfig
You should get back the following response.
Warning: Permanently added '10.255.255.5' (ECDSA) to the list of known hosts.
Authorized uses only. All activity may be monitored and reported.
Next, run the following command to set kubectl to target the Private Kubernetes Cluster.
export KUBECONFIG=./master-kubeconfig
Next, run the following command to verify you can connect to the Cluster.
kubectl cluster-info
You should get back the following output.
Kubernetes master is running at https://10.255.255.5:443
CoreDNS is running at https://10.255.255.5:443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
kubernetes-dashboard is running at https://10.255.255.5:443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
Metrics-server is running at https://10.255.255.5:443/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
By using an Azure Container Instance instead of a VM, you can more precisely control which tools are available in your Container Image through a Dockerfile, and customize how the container is deployed in Azure using the Azure CLI, Azure PowerShell, or ARM.
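To give a concrete idea of what such an image looks like, here is a hypothetical sketch of a jumpbox Dockerfile; the actual starkfell/k8s-jumpbox Dockerfile may differ. The tools are chosen to match the commands used in this article (kubectl to manage the cluster, sshpass and scp to copy the kubeconfig, jq to parse JSON output), and the kubectl version shown is an assumption:

```dockerfile
# Hypothetical jumpbox image sketch - not the actual starkfell/k8s-jumpbox Dockerfile.
FROM ubuntu:18.04

# kubectl to manage the cluster, sshpass/openssh-client to copy the
# kubeconfig from the Master Node, jq to parse JSON output.
RUN apt-get update && \
    apt-get install -y curl jq sshpass openssh-client && \
    curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.15.0/bin/linux/amd64/kubectl && \
    chmod +x kubectl && mv kubectl /usr/local/bin/ && \
    rm -rf /var/lib/apt/lists/*

# Keep the container running so you can connect to it from the Azure Portal.
CMD ["tail", "-f", "/dev/null"]
```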
If you are deploying a Private Kubernetes Cluster into an existing VNet, the Kubernetes API Endpoint IP Address will be something other than 10.255.255.5.
In today's article we covered how to access the Private Kubernetes Cluster from an Azure Container Instance. If there's a specific scenario that you wish to be covered in future articles, please create a New Issue in the starkfell/100DaysOfIaC GitHub repository.