From 9bb0d927f372711ac37774067f74f4f9948af091 Mon Sep 17 00:00:00 2001
From: Federico Arambarri
Date: Mon, 28 Aug 2023 17:01:53 -0300
Subject: [PATCH 1/6] Azure Workload Identity no longer in preview; AKS version in README

---
 01-prerequisites.md | 5 +----
 README.md           | 2 +-
 2 files changed, 2 insertions(+), 5 deletions(-)

diff --git a/01-prerequisites.md b/01-prerequisites.md
index 5d07c5a1..9dada93a 100644
--- a/01-prerequisites.md
+++ b/01-prerequisites.md
@@ -30,16 +30,13 @@ This is the starting point for the instructions on deploying the [AKS baseline r

 1. While the following feature(s) are still in _preview_, please enable them in your target subscription.

-    1. [Register the Workload Identity preview feature = `EnableWorkloadIdentityPreview`](https://learn.microsoft.com/azure/aks/workload-identity-deploy-cluster#register-the-enableworkloadidentitypreview-feature-flag)
-
     1. [Register the ImageCleaner (Eraser) preview feature = `EnableImageCleanerPreview`](https://learn.microsoft.com/azure/aks/image-cleaner#prerequisites)

     ```bash
-    az feature register --namespace "Microsoft.ContainerService" -n "EnableWorkloadIdentityPreview"
     az feature register --namespace "Microsoft.ContainerService" -n "EnableImageCleanerPreview"

     # Keep running until all say "Registered." (This may take up to 20 minutes.)
-    az feature list -o table --query "[?name=='Microsoft.ContainerService/EnableWorkloadIdentityPreview' || name=='Microsoft.ContainerService/EnableImageCleanerPreview'].{Name:name,State:properties.state}"
+    az feature list -o table --query "[?name=='Microsoft.ContainerService/EnableImageCleanerPreview'].{Name:name,State:properties.state}"

     # When all say "Registered" then re-register the AKS resource provider
     az provider register --namespace Microsoft.ContainerService
diff --git a/README.md b/README.md
index ac42e96b..9eda45ed 100644
--- a/README.md
+++ b/README.md
@@ -24,7 +24,7 @@ Finally, this implementation uses the [ASP.NET Core Docker sample web app](https

 #### Azure platform

-- AKS v1.26
+- AKS v1.27
 - System and User [node pool separation](https://learn.microsoft.com/azure/aks/use-system-pools)
 - [AKS-managed Azure AD](https://learn.microsoft.com/azure/aks/managed-aad)
 - Azure AD-backed Kubernetes RBAC (_local user accounts disabled_)
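> :bulb: With `EnableWorkloadIdentityPreview` removed above, `EnableImageCleanerPreview` is the only flag left to poll, so a single `az feature show` is a slightly leaner check than the `az feature list` query the guide keeps. A minimal sketch, not part of the patch itself, assuming a recent Azure CLI:

```bash
# Prints "Registered" once the one remaining preview flag is ready.
az feature show --namespace "Microsoft.ContainerService" -n "EnableImageCleanerPreview" --query properties.state -o tsv
```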
From a33bb0f5fb38fd5e53753f59ffb672b24eeb6660 Mon Sep 17 00:00:00 2001
From: Federico Arambarri
Date: Tue, 29 Aug 2023 08:53:42 -0300
Subject: [PATCH 2/6] Export variables so they are included in the env backup (saveenv.sh)

---
 06-aks-cluster.md                              | 10 +++++-----
 07-bootstrap-validation.md                     |  6 +++---
 09-secret-management-and-ingress-controller.md |  6 +++---
 11-validation.md                               |  6 +++---
 4 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/06-aks-cluster.md b/06-aks-cluster.md
index d097b55e..02cc8a98 100644
--- a/06-aks-cluster.md
+++ b/06-aks-cluster.md
@@ -9,11 +9,11 @@ Now that your [ACR instance is deployed and ready to support cluster bootstrappi

    > If you cloned this repo, then the value will be the original mspnp GitHub organization's repo, which will mean that your cluster will be bootstrapped using public container images. If instead you forked this repo, then the GitOps repo will be your own repo, and your cluster will be bootstrapped using container image references based on the values in your repo's manifest files. On the prior instruction page you had the opportunity to update those manifests to use your ACR instance. For guidance on using a private bootstrapping repo, see [Private bootstrapping repository](./cluster-manifests/README.md#private-bootstrapping-repository).

    ```bash
-   GITOPS_REPOURL=$(git config --get remote.origin.url)
-   echo GITOPS_REPOURL: $GITOPS_REPOURL
+   export GITOPS_REPOURL_AKS_BASELINE=$(git config --get remote.origin.url)
+   echo GITOPS_REPOURL_AKS_BASELINE: $GITOPS_REPOURL_AKS_BASELINE

-   GITOPS_CURRENT_BRANCH_NAME=$(git branch --show-current)
-   echo GITOPS_CURRENT_BRANCH_NAME: $GITOPS_CURRENT_BRANCH_NAME
+   export GITOPS_CURRENT_BRANCH_NAME_AKS_BASELINE=$(git branch --show-current)
+   echo GITOPS_CURRENT_BRANCH_NAME_AKS_BASELINE: $GITOPS_CURRENT_BRANCH_NAME_AKS_BASELINE
    ```

 1. Deploy the cluster ARM template.
@@ -21,7 +21,7 @@ Now that your [ACR instance is deployed and ready to support cluster bootstrappi

    ```bash
    # [This takes about 18 minutes.]
-   az deployment group create -g rg-bu0001a0008 -f cluster-stamp.bicep -p targetVnetResourceId=${RESOURCEID_VNET_CLUSTERSPOKE_AKS_BASELINE} clusterAdminAadGroupObjectId=${AADOBJECTID_GROUP_CLUSTERADMIN_AKS_BASELINE} a0008NamespaceReaderAadGroupObjectId=${AADOBJECTID_GROUP_A0008_READER_AKS_BASELINE} k8sControlPlaneAuthorizationTenantId=${TENANTID_K8SRBAC_AKS_BASELINE} appGatewayListenerCertificate=${APP_GATEWAY_LISTENER_CERTIFICATE_AKS_BASELINE} aksIngressControllerCertificate=${AKS_INGRESS_CONTROLLER_CERTIFICATE_BASE64_AKS_BASELINE} domainName=${DOMAIN_NAME_AKS_BASELINE} gitOpsBootstrappingRepoHttpsUrl=${GITOPS_REPOURL} gitOpsBootstrappingRepoBranch=${GITOPS_CURRENT_BRANCH_NAME} location=eastus2
+   az deployment group create -g rg-bu0001a0008 -f cluster-stamp.bicep -p targetVnetResourceId=${RESOURCEID_VNET_CLUSTERSPOKE_AKS_BASELINE} clusterAdminAadGroupObjectId=${AADOBJECTID_GROUP_CLUSTERADMIN_AKS_BASELINE} a0008NamespaceReaderAadGroupObjectId=${AADOBJECTID_GROUP_A0008_READER_AKS_BASELINE} k8sControlPlaneAuthorizationTenantId=${TENANTID_K8SRBAC_AKS_BASELINE} appGatewayListenerCertificate=${APP_GATEWAY_LISTENER_CERTIFICATE_AKS_BASELINE} aksIngressControllerCertificate=${AKS_INGRESS_CONTROLLER_CERTIFICATE_BASE64_AKS_BASELINE} domainName=${DOMAIN_NAME_AKS_BASELINE} gitOpsBootstrappingRepoHttpsUrl=${GITOPS_REPOURL_AKS_BASELINE} gitOpsBootstrappingRepoBranch=${GITOPS_CURRENT_BRANCH_NAME_AKS_BASELINE} location=eastus2
    ```

    > Alternatively, you could have updated the [`azuredeploy.parameters.prod.json`](./azuredeploy.parameters.prod.json) file and deployed as above, using `-p "@azuredeploy.parameters.prod.json"` instead of providing the individual key-value pairs.
diff --git a/07-bootstrap-validation.md b/07-bootstrap-validation.md
index 892377f3..b51b1e01 100644
--- a/07-bootstrap-validation.md
+++ b/07-bootstrap-validation.md
@@ -22,8 +22,8 @@ GitOps allows a team to author Kubernetes manifest files, persist them in their
 1. Get the cluster name.

    ```bash
-   AKS_CLUSTER_NAME=$(az aks list -g rg-bu0001a0008 --query '[0].name' -o tsv)
-   echo AKS_CLUSTER_NAME: $AKS_CLUSTER_NAME
+   export AKS_CLUSTER_NAME_AKS_BASELINE=$(az aks list -g rg-bu0001a0008 --query '[0].name' -o tsv)
+   echo AKS_CLUSTER_NAME_AKS_BASELINE: $AKS_CLUSTER_NAME_AKS_BASELINE
    ```

 1. Get AKS `kubectl` credentials.
@@ -33,7 +33,7 @@ GitOps allows a team to author Kubernetes manifest files, persist them in their
    > In a following step, you'll log in with a user that has been added to the Azure AD security group used to back the Kubernetes RBAC admin role. Executing the first `kubectl` command below will invoke the AAD login process to authorize the _user of your choice_, which will then be authenticated against Kubernetes RBAC to perform the action. The user you choose to log in with _must be a member of the AAD group bound_ to the `cluster-admin` ClusterRole. For simplicity you could either use the "break-glass" admin user created in [Azure Active Directory Integration](03-aad.md) (`bu0001a0008-admin`) or any user you assigned to the `cluster-admin` group assignment in your [`cluster-rbac.yaml`](cluster-manifests/cluster-rbac.yaml) file.

    ```bash
-   az aks get-credentials -g rg-bu0001a0008 -n $AKS_CLUSTER_NAME
+   az aks get-credentials -g rg-bu0001a0008 -n $AKS_CLUSTER_NAME_AKS_BASELINE
    ```

    :warning: At this point two important steps are happening:
diff --git a/09-secret-management-and-ingress-controller.md b/09-secret-management-and-ingress-controller.md
index 7c92cb58..5feb28be 100644
--- a/09-secret-management-and-ingress-controller.md
+++ b/09-secret-management-and-ingress-controller.md
@@ -7,8 +7,8 @@ Previously you have configured [workload prerequisites](./08-workload-prerequisi
 1. Get the AKS Ingress Controller Managed Identity details.

    ```bash
-   INGRESS_CONTROLLER_WORKLOAD_IDENTITY_CLIENT_ID=$(az deployment group show --resource-group rg-bu0001a0008 -n cluster-stamp --query properties.outputs.aksIngressControllerPodManagedIdentityClientId.value -o tsv)
-   echo INGRESS_CONTROLLER_WORKLOAD_IDENTITY_CLIENT_ID: $INGRESS_CONTROLLER_WORKLOAD_IDENTITY_CLIENT_ID
+   INGRESS_CONTROLLER_WORKLOAD_IDENTITY_CLIENT_ID_AKS_BASELINE=$(az deployment group show --resource-group rg-bu0001a0008 -n cluster-stamp --query properties.outputs.aksIngressControllerPodManagedIdentityClientId.value -o tsv)
+   echo INGRESS_CONTROLLER_WORKLOAD_IDENTITY_CLIENT_ID_AKS_BASELINE: $INGRESS_CONTROLLER_WORKLOAD_IDENTITY_CLIENT_ID_AKS_BASELINE
    ```

 1. Ensure your bootstrapping process has created the following namespace.
@@ -34,7 +34,7 @@ Previously you have configured [workload prerequisites](./08-workload-prerequisi
   spec:
     provider: azure
     parameters:
-      clientID: $INGRESS_CONTROLLER_WORKLOAD_IDENTITY_CLIENT_ID
+      clientID: $INGRESS_CONTROLLER_WORKLOAD_IDENTITY_CLIENT_ID_AKS_BASELINE
       usePodIdentity: "false"
       useVMManagedIdentity: "false"
       keyvaultName: $KEYVAULT_NAME_AKS_BASELINE
diff --git a/11-validation.md b/11-validation.md
index 3e3580ef..e50bc6e8 100644
--- a/11-validation.md
+++ b/11-validation.md
@@ -14,15 +14,15 @@ This section will help you to validate the workload is exposed correctly and res

    ```bash
    # query the Azure Application Gateway public IP
-   APPGW_PUBLIC_IP=$(az deployment group show --resource-group rg-enterprise-networking-spokes -n spoke-BU0001A0008 --query properties.outputs.appGwPublicIpAddress.value -o tsv)
-   echo APPGW_PUBLIC_IP: $APPGW_PUBLIC_IP
+   APPGW_PUBLIC_IP_AKS_BASELINE=$(az deployment group show --resource-group rg-enterprise-networking-spokes -n spoke-BU0001A0008 --query properties.outputs.appGwPublicIpAddress.value -o tsv)
+   echo APPGW_PUBLIC_IP_AKS_BASELINE: $APPGW_PUBLIC_IP_AKS_BASELINE
    ```

 1. Create an `A` record for DNS.

    > :bulb: You can simulate this via a local hosts file modification. You're welcome to add a real DNS entry for your specific deployment's application domain name, if you have access to do so.

-   Map the Azure Application Gateway public IP address to the application domain name. To do that, please edit your hosts file (`C:\Windows\System32\drivers\etc\hosts` or `/etc/hosts`) and add the following record to the end: `${APPGW_PUBLIC_IP} bicycle.${DOMAIN_NAME_AKS_BASELINE}` (e.g. `50.140.130.120 bicycle.contoso.com`)
+   Map the Azure Application Gateway public IP address to the application domain name. To do that, please edit your hosts file (`C:\Windows\System32\drivers\etc\hosts` or `/etc/hosts`) and add the following record to the end: `${APPGW_PUBLIC_IP_AKS_BASELINE} bicycle.${DOMAIN_NAME_AKS_BASELINE}` (e.g. `50.140.130.120 bicycle.contoso.com`)

 1. Browse to the site (e.g. <https://bicycle.contoso.com>).
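> :bulb: The hosts-file edit this patch renames can also be scripted from the same shell on Linux or WSL. A sketch, assuming `APPGW_PUBLIC_IP_AKS_BASELINE` and `DOMAIN_NAME_AKS_BASELINE` are still set (Windows users would edit `C:\Windows\System32\drivers\etc\hosts` by hand):

```bash
# Append the simulated A record; writing /etc/hosts requires elevation.
echo "${APPGW_PUBLIC_IP_AKS_BASELINE} bicycle.${DOMAIN_NAME_AKS_BASELINE}" | sudo tee -a /etc/hosts
```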
From 33e4a961066ad6d8556c322d6cd840e518161049 Mon Sep 17 00:00:00 2001
From: Federico Arambarri
Date: Wed, 30 Aug 2023 12:48:33 -0300
Subject: [PATCH 3/6] Allow more variables to be saved by saveenv.sh

---
 04-networking.md                               | 12 ++++++------
 09-secret-management-and-ingress-controller.md |  2 +-
 11-validation.md                               |  2 +-
 3 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/04-networking.md b/04-networking.md
index 6f2758c5..328893c6 100644
--- a/04-networking.md
+++ b/04-networking.md
@@ -75,11 +75,11 @@ The following two resource groups will be created and populated with networking

    > :book: The networking team receives a request from an app team in business unit (BU) 0001 for a network spoke to house their new AKS-based application (internally known as Application ID: A0008). The network team talks with the app team to understand their requirements and aligns those needs with Microsoft's best practices for a general-purpose AKS cluster deployment. They capture those specific requirements and deploy the spoke, aligning to those specs, and connecting it to the matching regional hub.

    ```bash
-   RESOURCEID_VNET_HUB=$(az deployment group show -g rg-enterprise-networking-hubs -n hub-default --query properties.outputs.hubVnetId.value -o tsv)
-   echo RESOURCEID_VNET_HUB: $RESOURCEID_VNET_HUB
+   export RESOURCEID_VNET_HUB_AKS_BASELINE=$(az deployment group show -g rg-enterprise-networking-hubs -n hub-default --query properties.outputs.hubVnetId.value -o tsv)
+   echo RESOURCEID_VNET_HUB_AKS_BASELINE: $RESOURCEID_VNET_HUB_AKS_BASELINE

    # [This takes about four minutes to run.]
-   az deployment group create -g rg-enterprise-networking-spokes -f networking/spoke-BU0001A0008.bicep -p location=eastus2 hubVnetResourceId="${RESOURCEID_VNET_HUB}"
+   az deployment group create -g rg-enterprise-networking-spokes -f networking/spoke-BU0001A0008.bicep -p location=eastus2 hubVnetResourceId="${RESOURCEID_VNET_HUB_AKS_BASELINE}"
    ```

    The spoke creation will emit the following:
@@ -93,11 +93,11 @@ The following two resource groups will be created and populated with networking

    > :book: Now that their regional hub has its first spoke, the hub can no longer run off of the generic hub template. The networking team creates a named hub template (e.g. `hub-eastus2.bicep`) to forever represent this specific hub and the features this specific hub needs in order to support its spokes' requirements. As new spokes are attached and new requirements arise for the regional hub, they will be added to this template file.

    ```bash
-   RESOURCEID_SUBNET_NODEPOOLS=$(az deployment group show -g rg-enterprise-networking-spokes -n spoke-BU0001A0008 --query properties.outputs.nodepoolSubnetResourceIds.value -o json)
-   echo RESOURCEID_SUBNET_NODEPOOLS: $RESOURCEID_SUBNET_NODEPOOLS
+   export RESOURCEID_SUBNET_NODEPOOLS_AKS_BASELINE=$(az deployment group show -g rg-enterprise-networking-spokes -n spoke-BU0001A0008 --query properties.outputs.nodepoolSubnetResourceIds.value -o json)
+   echo RESOURCEID_SUBNET_NODEPOOLS_AKS_BASELINE: $RESOURCEID_SUBNET_NODEPOOLS_AKS_BASELINE

    # [This takes about ten minutes to run.]
-   az deployment group create -g rg-enterprise-networking-hubs -f networking/hub-regionA.bicep -p location=eastus2 nodepoolSubnetResourceIds="${RESOURCEID_SUBNET_NODEPOOLS}"
+   az deployment group create -g rg-enterprise-networking-hubs -f networking/hub-regionA.bicep -p location=eastus2 nodepoolSubnetResourceIds="${RESOURCEID_SUBNET_NODEPOOLS_AKS_BASELINE}"
    ```

    > :book: At this point the networking team has delivered a spoke in which BU 0001's app team can lay down their AKS cluster (ID: A0008). The networking team provides the necessary information to the app team for them to reference in their infrastructure-as-code artifacts.
diff --git a/09-secret-management-and-ingress-controller.md b/09-secret-management-and-ingress-controller.md
index 5feb28be..559552e2 100644
--- a/09-secret-management-and-ingress-controller.md
+++ b/09-secret-management-and-ingress-controller.md
@@ -7,7 +7,7 @@ Previously you have configured [workload prerequisites](./08-workload-prerequisi
 1. Get the AKS Ingress Controller Managed Identity details.

    ```bash
-   INGRESS_CONTROLLER_WORKLOAD_IDENTITY_CLIENT_ID_AKS_BASELINE=$(az deployment group show --resource-group rg-bu0001a0008 -n cluster-stamp --query properties.outputs.aksIngressControllerPodManagedIdentityClientId.value -o tsv)
+   export INGRESS_CONTROLLER_WORKLOAD_IDENTITY_CLIENT_ID_AKS_BASELINE=$(az deployment group show --resource-group rg-bu0001a0008 -n cluster-stamp --query properties.outputs.aksIngressControllerPodManagedIdentityClientId.value -o tsv)
    echo INGRESS_CONTROLLER_WORKLOAD_IDENTITY_CLIENT_ID_AKS_BASELINE: $INGRESS_CONTROLLER_WORKLOAD_IDENTITY_CLIENT_ID_AKS_BASELINE
    ```
diff --git a/11-validation.md b/11-validation.md
index e50bc6e8..224c736e 100644
--- a/11-validation.md
+++ b/11-validation.md
@@ -14,7 +14,7 @@ This section will help you to validate the workload is exposed correctly and res

    ```bash
    # query the Azure Application Gateway public IP
-   APPGW_PUBLIC_IP_AKS_BASELINE=$(az deployment group show --resource-group rg-enterprise-networking-spokes -n spoke-BU0001A0008 --query properties.outputs.appGwPublicIpAddress.value -o tsv)
+   export APPGW_PUBLIC_IP_AKS_BASELINE=$(az deployment group show --resource-group rg-enterprise-networking-spokes -n spoke-BU0001A0008 --query properties.outputs.appGwPublicIpAddress.value -o tsv)
    echo APPGW_PUBLIC_IP_AKS_BASELINE: $APPGW_PUBLIC_IP_AKS_BASELINE
    ```
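> :bulb: Patches 2 and 3 add `export` and the `_AKS_BASELINE` suffix so a helper such as `saveenv.sh` can snapshot the session between walkthrough pages. That script itself is not shown in this series; a minimal sketch of the idea, assuming bash and the suffix convention:

```bash
#!/usr/bin/env bash
# Hypothetical saveenv.sh: persist every exported *_AKS_BASELINE variable.
# A later shell can restore the session with: source aks_baseline.env
export -p | grep '_AKS_BASELINE=' > aks_baseline.env
```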
From 53d2e95c72ad0dbaf85a275c294699794106283f Mon Sep 17 00:00:00 2001
From: Federico Arambarri
Date: Wed, 30 Aug 2023 14:25:54 -0300
Subject: [PATCH 4/6] kured update

---
 05-bootstrap-prep.md                                   | 2 +-
 cluster-manifests/cluster-baseline-settings/kured.yaml | 9 +++++----
 2 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/05-bootstrap-prep.md b/05-bootstrap-prep.md
index 65210fc0..0bee73a7 100644
--- a/05-bootstrap-prep.md
+++ b/05-bootstrap-prep.md
@@ -58,7 +58,7 @@ In addition to ACR being deployed to support bootstrapping, this is where any ot
    echo ACR_NAME_AKS_BASELINE: $ACR_NAME_AKS_BASELINE

    # Import core image(s) hosted in public container registries to be used during bootstrapping
-   az acr import --source ghcr.io/kubereboot/kured:1.12.0 -n $ACR_NAME_AKS_BASELINE
+   az acr import --source ghcr.io/kubereboot/kured:1.14.0 -n $ACR_NAME_AKS_BASELINE
    ```

    > In this walkthrough, there is only one image that is included in the bootstrapping process. It's included as a reference for this process. Your choice to use Kubernetes Reboot Daemon (Kured) or any other images, including Helm charts, as part of your bootstrapping is yours to make.
diff --git a/cluster-manifests/cluster-baseline-settings/kured.yaml b/cluster-manifests/cluster-baseline-settings/kured.yaml
index c1bf07a6..84583715 100644
--- a/cluster-manifests/cluster-baseline-settings/kured.yaml
+++ b/cluster-manifests/cluster-baseline-settings/kured.yaml
@@ -1,4 +1,4 @@
-# Source: https://github.com/kubereboot/charts/tree/kured-4.2.0/charts/kured (1.12.0)
+# Source: https://github.com/kubereboot/charts/tree/kured-5.2.0/charts/kured (1.14.0)
 apiVersion: rbac.authorization.k8s.io/v1
 kind: ClusterRole
 metadata:
@@ -81,6 +81,7 @@ metadata:
   name: kured # Must match `--ds-name`
   namespace: cluster-baseline-settings # Must match `--ds-namespace`
 spec:
+  revisionHistoryLimit: 10
   selector:
     matchLabels:
       app.kubernetes.io/name: kured
@@ -118,10 +119,10 @@ spec:
         # PRODUCTION READINESS CHANGE REQUIRED
         # This image should be sourced from a non-public container registry, such as the
         # one deployed alongside this reference implementation.
-        # az acr import --source ghcr.io/kubereboot/kured:1.12.0 -n <your-acr-name>
+        # az acr import --source ghcr.io/kubereboot/kured:1.14.0 -n <your-acr-name>
         # and then set this to
-        # image: <your-acr-name>.azurecr.io/kubereboot/kured:1.12.0
-        image: ghcr.io/kubereboot/kured:1.12.0
+        # image: <your-acr-name>.azurecr.io/kubereboot/kured:1.14.0
+        image: ghcr.io/kubereboot/kured:1.14.0
         imagePullPolicy: IfNotPresent
         securityContext:
           privileged: true # Give permission to nsenter /proc/1/ns/mnt
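> :bulb: After bumping the import to 1.14.0, it is worth confirming the tag actually landed in the registry before the updated DaemonSet references it. A sketch, assuming `ACR_NAME_AKS_BASELINE` is still set (`az acr import` keeps the source path, so the repository is `kubereboot/kured`):

```bash
# Expect 1.14.0 to appear in the tag list.
az acr repository show-tags -n $ACR_NAME_AKS_BASELINE --repository kubereboot/kured -o table
```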
From 8b335ca90fb3e0b6c962773ea2568be1451a2f50 Mon Sep 17 00:00:00 2001
From: Federico Arambarri
Date: Fri, 1 Sep 2023 13:36:42 -0300
Subject: [PATCH 5/6] Update Traefik to v2.10.4

---
 09-secret-management-and-ingress-controller.md | 2 +-
 README.md                                      | 2 +-
 workload/traefik.yaml                          | 6 +++---
 3 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/09-secret-management-and-ingress-controller.md b/09-secret-management-and-ingress-controller.md
index 559552e2..99a7888e 100644
--- a/09-secret-management-and-ingress-controller.md
+++ b/09-secret-management-and-ingress-controller.md
@@ -58,7 +58,7 @@ Previously you have configured [workload prerequisites](./08-workload-prerequisi

    ```bash
    # Import ingress controller image hosted in public container registries
-   az acr import --source docker.io/library/traefik:v2.9.6 -n $ACR_NAME_AKS_BASELINE
+   az acr import --source docker.io/library/traefik:v2.10.4 -n $ACR_NAME_AKS_BASELINE
    ```

 1. Install the Traefik Ingress Controller.
diff --git a/README.md b/README.md
index 9eda45ed..8108a2dd 100644
--- a/README.md
+++ b/README.md
@@ -43,7 +43,7 @@ Finally, this implementation uses the [ASP.NET Core Docker sample web app](https
 - [ImageCleaner (Eraser)](https://learn.microsoft.com/azure/aks/image-cleaner) _[AKS-managed add-on]_
 - [Kubernetes Reboot Daemon](https://learn.microsoft.com/azure/aks/node-updates-kured)
 - [Secrets Store CSI Driver for Kubernetes](https://learn.microsoft.com/azure/aks/csi-secrets-store-driver) _[AKS-managed add-on]_
-- [Traefik Ingress Controller](https://doc.traefik.io/traefik/v2.5/routing/providers/kubernetes-ingress/)
+- [Traefik Ingress Controller](https://doc.traefik.io/traefik/v2.10/routing/providers/kubernetes-ingress/)

 ![Network diagram depicting a hub-spoke network with two peered VNets and main Azure resources used in the architecture.](https://learn.microsoft.com/azure/architecture/reference-architectures/containers/aks/images/secure-baseline-architecture.svg)

diff --git a/workload/traefik.yaml b/workload/traefik.yaml
index 53520396..b477e532 100644
--- a/workload/traefik.yaml
+++ b/workload/traefik.yaml
@@ -228,10 +228,10 @@ spec:
         # PRODUCTION READINESS CHANGE REQUIRED
         # This image should be sourced from a non-public container registry, such as the
         # one deployed alongside this reference implementation.
-        # az acr import --source docker.io/library/traefik:v2.9.6 -n <your-acr-name>
+        # az acr import --source docker.io/library/traefik:v2.10.4 -n <your-acr-name>
         # and then set this to
-        # image: <your-acr-name>.azurecr.io/library/traefik:v2.9.6
-      - image: docker.io/library/traefik:v2.9.6
+        # image: <your-acr-name>.azurecr.io/library/traefik:v2.10.4
+      - image: docker.io/library/traefik:v2.10.4
         imagePullPolicy: IfNotPresent
         name: traefik-ingress-controller
         resources:
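> :bulb: The same verification applies to the Traefik bump, and the superseded tag can be pruned once nothing references it. A sketch, assuming `ACR_NAME_AKS_BASELINE` is set and no running workload still pins v2.9.6:

```bash
# Confirm v2.10.4 was imported, then remove the old tag from the registry.
az acr repository show-tags -n $ACR_NAME_AKS_BASELINE --repository library/traefik -o table
az acr repository delete -n $ACR_NAME_AKS_BASELINE --image library/traefik:v2.9.6 --yes
```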
From 93073242a43c1626cc3b1744f9e0687f684796ec Mon Sep 17 00:00:00 2001
From: Federico Arambarri
Date: Thu, 7 Sep 2023 13:49:29 -0300
Subject: [PATCH 6/6] Revert variable name changes

---
 04-networking.md                               | 12 ++++++------
 06-aks-cluster.md                              | 10 +++++-----
 07-bootstrap-validation.md                     |  6 +++---
 09-secret-management-and-ingress-controller.md |  6 +++---
 11-validation.md                               |  6 +++---
 5 files changed, 20 insertions(+), 20 deletions(-)

diff --git a/04-networking.md b/04-networking.md
index 328893c6..6f2758c5 100644
--- a/04-networking.md
+++ b/04-networking.md
@@ -75,11 +75,11 @@ The following two resource groups will be created and populated with networking

    > :book: The networking team receives a request from an app team in business unit (BU) 0001 for a network spoke to house their new AKS-based application (internally known as Application ID: A0008). The network team talks with the app team to understand their requirements and aligns those needs with Microsoft's best practices for a general-purpose AKS cluster deployment. They capture those specific requirements and deploy the spoke, aligning to those specs, and connecting it to the matching regional hub.

    ```bash
-   export RESOURCEID_VNET_HUB_AKS_BASELINE=$(az deployment group show -g rg-enterprise-networking-hubs -n hub-default --query properties.outputs.hubVnetId.value -o tsv)
-   echo RESOURCEID_VNET_HUB_AKS_BASELINE: $RESOURCEID_VNET_HUB_AKS_BASELINE
+   RESOURCEID_VNET_HUB=$(az deployment group show -g rg-enterprise-networking-hubs -n hub-default --query properties.outputs.hubVnetId.value -o tsv)
+   echo RESOURCEID_VNET_HUB: $RESOURCEID_VNET_HUB

    # [This takes about four minutes to run.]
-   az deployment group create -g rg-enterprise-networking-spokes -f networking/spoke-BU0001A0008.bicep -p location=eastus2 hubVnetResourceId="${RESOURCEID_VNET_HUB_AKS_BASELINE}"
+   az deployment group create -g rg-enterprise-networking-spokes -f networking/spoke-BU0001A0008.bicep -p location=eastus2 hubVnetResourceId="${RESOURCEID_VNET_HUB}"
    ```

    The spoke creation will emit the following:
@@ -93,11 +93,11 @@ The following two resource groups will be created and populated with networking

    > :book: Now that their regional hub has its first spoke, the hub can no longer run off of the generic hub template. The networking team creates a named hub template (e.g. `hub-eastus2.bicep`) to forever represent this specific hub and the features this specific hub needs in order to support its spokes' requirements. As new spokes are attached and new requirements arise for the regional hub, they will be added to this template file.

    ```bash
-   export RESOURCEID_SUBNET_NODEPOOLS_AKS_BASELINE=$(az deployment group show -g rg-enterprise-networking-spokes -n spoke-BU0001A0008 --query properties.outputs.nodepoolSubnetResourceIds.value -o json)
-   echo RESOURCEID_SUBNET_NODEPOOLS_AKS_BASELINE: $RESOURCEID_SUBNET_NODEPOOLS_AKS_BASELINE
+   RESOURCEID_SUBNET_NODEPOOLS=$(az deployment group show -g rg-enterprise-networking-spokes -n spoke-BU0001A0008 --query properties.outputs.nodepoolSubnetResourceIds.value -o json)
+   echo RESOURCEID_SUBNET_NODEPOOLS: $RESOURCEID_SUBNET_NODEPOOLS

    # [This takes about ten minutes to run.]
-   az deployment group create -g rg-enterprise-networking-hubs -f networking/hub-regionA.bicep -p location=eastus2 nodepoolSubnetResourceIds="${RESOURCEID_SUBNET_NODEPOOLS_AKS_BASELINE}"
+   az deployment group create -g rg-enterprise-networking-hubs -f networking/hub-regionA.bicep -p location=eastus2 nodepoolSubnetResourceIds="${RESOURCEID_SUBNET_NODEPOOLS}"
    ```

    > :book: At this point the networking team has delivered a spoke in which BU 0001's app team can lay down their AKS cluster (ID: A0008). The networking team provides the necessary information to the app team for them to reference in their infrastructure-as-code artifacts.
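> :bulb: Because this commit reverts the renames from patches 2 and 3, the series should leave `04-networking.md` with no net change. One way to verify that after applying all six patches, assuming the base is the parent of the first commit (9bb0d927):

```bash
# An empty diff confirms the file ends up identical to the base commit.
git diff 9bb0d927^ HEAD -- 04-networking.md
```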
diff --git a/06-aks-cluster.md b/06-aks-cluster.md
index 02cc8a98..d097b55e 100644
--- a/06-aks-cluster.md
+++ b/06-aks-cluster.md
@@ -9,11 +9,11 @@ Now that your [ACR instance is deployed and ready to support cluster bootstrappi

    > If you cloned this repo, then the value will be the original mspnp GitHub organization's repo, which will mean that your cluster will be bootstrapped using public container images. If instead you forked this repo, then the GitOps repo will be your own repo, and your cluster will be bootstrapped using container image references based on the values in your repo's manifest files. On the prior instruction page you had the opportunity to update those manifests to use your ACR instance. For guidance on using a private bootstrapping repo, see [Private bootstrapping repository](./cluster-manifests/README.md#private-bootstrapping-repository).
    ```bash
-   export GITOPS_REPOURL_AKS_BASELINE=$(git config --get remote.origin.url)
-   echo GITOPS_REPOURL_AKS_BASELINE: $GITOPS_REPOURL_AKS_BASELINE
+   GITOPS_REPOURL=$(git config --get remote.origin.url)
+   echo GITOPS_REPOURL: $GITOPS_REPOURL

-   export GITOPS_CURRENT_BRANCH_NAME_AKS_BASELINE=$(git branch --show-current)
-   echo GITOPS_CURRENT_BRANCH_NAME_AKS_BASELINE: $GITOPS_CURRENT_BRANCH_NAME_AKS_BASELINE
+   GITOPS_CURRENT_BRANCH_NAME=$(git branch --show-current)
+   echo GITOPS_CURRENT_BRANCH_NAME: $GITOPS_CURRENT_BRANCH_NAME
    ```

 1. Deploy the cluster ARM template.
@@ -21,7 +21,7 @@ Now that your [ACR instance is deployed and ready to support cluster bootstrappi

    ```bash
    # [This takes about 18 minutes.]
-   az deployment group create -g rg-bu0001a0008 -f cluster-stamp.bicep -p targetVnetResourceId=${RESOURCEID_VNET_CLUSTERSPOKE_AKS_BASELINE} clusterAdminAadGroupObjectId=${AADOBJECTID_GROUP_CLUSTERADMIN_AKS_BASELINE} a0008NamespaceReaderAadGroupObjectId=${AADOBJECTID_GROUP_A0008_READER_AKS_BASELINE} k8sControlPlaneAuthorizationTenantId=${TENANTID_K8SRBAC_AKS_BASELINE} appGatewayListenerCertificate=${APP_GATEWAY_LISTENER_CERTIFICATE_AKS_BASELINE} aksIngressControllerCertificate=${AKS_INGRESS_CONTROLLER_CERTIFICATE_BASE64_AKS_BASELINE} domainName=${DOMAIN_NAME_AKS_BASELINE} gitOpsBootstrappingRepoHttpsUrl=${GITOPS_REPOURL_AKS_BASELINE} gitOpsBootstrappingRepoBranch=${GITOPS_CURRENT_BRANCH_NAME_AKS_BASELINE} location=eastus2
+   az deployment group create -g rg-bu0001a0008 -f cluster-stamp.bicep -p targetVnetResourceId=${RESOURCEID_VNET_CLUSTERSPOKE_AKS_BASELINE} clusterAdminAadGroupObjectId=${AADOBJECTID_GROUP_CLUSTERADMIN_AKS_BASELINE} a0008NamespaceReaderAadGroupObjectId=${AADOBJECTID_GROUP_A0008_READER_AKS_BASELINE} k8sControlPlaneAuthorizationTenantId=${TENANTID_K8SRBAC_AKS_BASELINE} appGatewayListenerCertificate=${APP_GATEWAY_LISTENER_CERTIFICATE_AKS_BASELINE} aksIngressControllerCertificate=${AKS_INGRESS_CONTROLLER_CERTIFICATE_BASE64_AKS_BASELINE} domainName=${DOMAIN_NAME_AKS_BASELINE} gitOpsBootstrappingRepoHttpsUrl=${GITOPS_REPOURL} gitOpsBootstrappingRepoBranch=${GITOPS_CURRENT_BRANCH_NAME} location=eastus2
    ```

    > Alternatively, you could have updated the [`azuredeploy.parameters.prod.json`](./azuredeploy.parameters.prod.json) file and deployed as above, using `-p "@azuredeploy.parameters.prod.json"` instead of providing the individual key-value pairs.
diff --git a/07-bootstrap-validation.md b/07-bootstrap-validation.md
index b51b1e01..892377f3 100644
--- a/07-bootstrap-validation.md
+++ b/07-bootstrap-validation.md
@@ -22,8 +22,8 @@ GitOps allows a team to author Kubernetes manifest files, persist them in their
 1. Get the cluster name.

    ```bash
-   export AKS_CLUSTER_NAME_AKS_BASELINE=$(az aks list -g rg-bu0001a0008 --query '[0].name' -o tsv)
-   echo AKS_CLUSTER_NAME_AKS_BASELINE: $AKS_CLUSTER_NAME_AKS_BASELINE
+   AKS_CLUSTER_NAME=$(az aks list -g rg-bu0001a0008 --query '[0].name' -o tsv)
+   echo AKS_CLUSTER_NAME: $AKS_CLUSTER_NAME
    ```

 1. Get AKS `kubectl` credentials.
@@ -33,7 +33,7 @@ GitOps allows a team to author Kubernetes manifest files, persist them in their
    > In a following step, you'll log in with a user that has been added to the Azure AD security group used to back the Kubernetes RBAC admin role. Executing the first `kubectl` command below will invoke the AAD login process to authorize the _user of your choice_, which will then be authenticated against Kubernetes RBAC to perform the action. The user you choose to log in with _must be a member of the AAD group bound_ to the `cluster-admin` ClusterRole. For simplicity you could either use the "break-glass" admin user created in [Azure Active Directory Integration](03-aad.md) (`bu0001a0008-admin`) or any user you assigned to the `cluster-admin` group assignment in your [`cluster-rbac.yaml`](cluster-manifests/cluster-rbac.yaml) file.

    ```bash
-   az aks get-credentials -g rg-bu0001a0008 -n $AKS_CLUSTER_NAME_AKS_BASELINE
+   az aks get-credentials -g rg-bu0001a0008 -n $AKS_CLUSTER_NAME
    ```

    :warning: At this point two important steps are happening:
diff --git a/09-secret-management-and-ingress-controller.md b/09-secret-management-and-ingress-controller.md
index 99a7888e..2e604546 100644
--- a/09-secret-management-and-ingress-controller.md
+++ b/09-secret-management-and-ingress-controller.md
@@ -7,8 +7,8 @@ Previously you have configured [workload prerequisites](./08-workload-prerequisi
 1. Get the AKS Ingress Controller Managed Identity details.

    ```bash
-   export INGRESS_CONTROLLER_WORKLOAD_IDENTITY_CLIENT_ID_AKS_BASELINE=$(az deployment group show --resource-group rg-bu0001a0008 -n cluster-stamp --query properties.outputs.aksIngressControllerPodManagedIdentityClientId.value -o tsv)
-   echo INGRESS_CONTROLLER_WORKLOAD_IDENTITY_CLIENT_ID_AKS_BASELINE: $INGRESS_CONTROLLER_WORKLOAD_IDENTITY_CLIENT_ID_AKS_BASELINE
+   INGRESS_CONTROLLER_WORKLOAD_IDENTITY_CLIENT_ID=$(az deployment group show --resource-group rg-bu0001a0008 -n cluster-stamp --query properties.outputs.aksIngressControllerPodManagedIdentityClientId.value -o tsv)
+   echo INGRESS_CONTROLLER_WORKLOAD_IDENTITY_CLIENT_ID: $INGRESS_CONTROLLER_WORKLOAD_IDENTITY_CLIENT_ID
    ```

 1. Ensure your bootstrapping process has created the following namespace.
@@ -34,7 +34,7 @@ Previously you have configured [workload prerequisites](./08-workload-prerequisi
   spec:
     provider: azure
     parameters:
-      clientID: $INGRESS_CONTROLLER_WORKLOAD_IDENTITY_CLIENT_ID_AKS_BASELINE
+      clientID: $INGRESS_CONTROLLER_WORKLOAD_IDENTITY_CLIENT_ID
       usePodIdentity: "false"
       useVMManagedIdentity: "false"
       keyvaultName: $KEYVAULT_NAME_AKS_BASELINE
diff --git a/11-validation.md b/11-validation.md
index 224c736e..3e3580ef 100644
--- a/11-validation.md
+++ b/11-validation.md
@@ -14,15 +14,15 @@ This section will help you to validate the workload is exposed correctly and res

    ```bash
    # query the Azure Application Gateway public IP
-   export APPGW_PUBLIC_IP_AKS_BASELINE=$(az deployment group show --resource-group rg-enterprise-networking-spokes -n spoke-BU0001A0008 --query properties.outputs.appGwPublicIpAddress.value -o tsv)
-   echo APPGW_PUBLIC_IP_AKS_BASELINE: $APPGW_PUBLIC_IP_AKS_BASELINE
+   APPGW_PUBLIC_IP=$(az deployment group show --resource-group rg-enterprise-networking-spokes -n spoke-BU0001A0008 --query properties.outputs.appGwPublicIpAddress.value -o tsv)
+   echo APPGW_PUBLIC_IP: $APPGW_PUBLIC_IP
    ```

 1. Create an `A` record for DNS.

    > :bulb: You can simulate this via a local hosts file modification. You're welcome to add a real DNS entry for your specific deployment's application domain name, if you have access to do so.

-   Map the Azure Application Gateway public IP address to the application domain name. To do that, please edit your hosts file (`C:\Windows\System32\drivers\etc\hosts` or `/etc/hosts`) and add the following record to the end: `${APPGW_PUBLIC_IP_AKS_BASELINE} bicycle.${DOMAIN_NAME_AKS_BASELINE}` (e.g. `50.140.130.120 bicycle.contoso.com`)
+   Map the Azure Application Gateway public IP address to the application domain name. To do that, please edit your hosts file (`C:\Windows\System32\drivers\etc\hosts` or `/etc/hosts`) and add the following record to the end: `${APPGW_PUBLIC_IP} bicycle.${DOMAIN_NAME_AKS_BASELINE}` (e.g. `50.140.130.120 bicycle.contoso.com`)

 1. Browse to the site (e.g. <https://bicycle.contoso.com>).
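> :bulb: A series in this mbox format is normally applied with `git am`, which preserves each commit's author, date, and message. A sketch, assuming the six patches were exported with `git format-patch` into the current directory:

```bash
# Applies PATCH 1/6 through 6/6 in order, creating one commit each.
git am *.patch
```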