Commit

feat: Add support for AppMesh controller addon with AppMesh mTLS example (aws-ia#539)

Co-authored-by: Elango Sundararajan <[email protected]>
Co-authored-by: Bryant Biggs <[email protected]>
3 people authored and allamand committed Dec 15, 2022
1 parent ab80c2c commit 4d283bc
Showing 15 changed files with 611 additions and 1 deletion.
1 change: 1 addition & 0 deletions .github/workflows/plan-examples.py
@@ -9,6 +9,7 @@ def get_examples():
returning a string formatted json array of the example directories minus those that are excluded
"""
exclude = {
'examples/appmesh-mtls', # excluded until Route53 is set up
'examples/eks-cluster-with-external-dns', # excluded until Route53 is set up
'examples/ci-cd/gitlab-ci-cd', # excluded since GitLab auth, backend, etc. required
'examples/fully-private-eks-cluster/vpc', # skipping until issue #711 is addressed
61 changes: 61 additions & 0 deletions examples/appmesh-mtls/README.md
@@ -0,0 +1,61 @@
# EKS Cluster w/ AppMesh mTLS

This example shows how to provision an EKS cluster with AppMesh mTLS enabled. It deploys the AppMesh controller, cert-manager, and the AWS Private CA issuer add-ons, and creates an ACM Private CA root certificate authority from which cert-manager issues certificates inside the cluster.

## Prerequisites:

Ensure that you have the following tools installed locally:

1. [aws cli](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html)
2. [kubectl](https://kubernetes.io/docs/tasks/tools/)
3. [terraform](https://learn.hashicorp.com/tutorials/terraform/install-cli)

## Deploy

To provision this example:

```sh
terraform init
terraform apply
```

Enter `yes` at the command prompt to apply.
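
Once the apply completes, the `configure_kubectl` output shows the exact command for the next step:

```sh
terraform output configure_kubectl
```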


## Validate

The following commands update the `kubeconfig` on your local machine and allow you to interact with your EKS cluster using `kubectl` to validate the deployment.

1. Run `update-kubeconfig` command:

```sh
aws eks --region <REGION> update-kubeconfig --name <CLUSTER_NAME>
```
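
With this example's defaults (`region = "us-west-2"`, cluster name derived from the directory name), that is:

```sh
aws eks --region us-west-2 update-kubeconfig --name appmesh-mtls
```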

2. List the nodes running currently

```sh
kubectl get nodes

# Output should look like below
NAME STATUS ROLES AGE VERSION
ip-10-0-30-125.us-west-2.compute.internal Ready <none> 2m19s v1.22.9-eks-810597c
```

3. List out the pods running currently:

```sh
kubectl get pods -A

# Among others, you should see pods for the add-ons enabled in this example:
# coredns, kube-proxy, aws-node (VPC CNI), appmesh-controller, cert-manager,
# and aws-privateca-issuer, all in Running status
```
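
4. Verify that cert-manager has issued a certificate from the private CA. The names below come from this example's defaults (`certificate_name = "example"`; the cluster issuer is named after the cluster):

```sh
# Cluster-wide issuer backed by the ACM Private CA
kubectl get awspcaclusterissuer

# Certificate resource created by this example; READY should be True
kubectl get certificate -n default

# The issued certificate is stored in this Kubernetes Secret
kubectl get secret example-clusterissuer -n default
```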

## Destroy

To tear down and remove the resources created in this example, destroy the add-ons first, then the cluster, then everything else, to avoid dependency errors during teardown:

```sh
terraform destroy -target="module.eks_blueprints_kubernetes_addons" -auto-approve
terraform destroy -target="module.eks_blueprints" -auto-approve
terraform destroy -auto-approve
```
231 changes: 231 additions & 0 deletions examples/appmesh-mtls/main.tf
@@ -0,0 +1,231 @@
provider "aws" {
region = local.region
}

provider "kubernetes" {
host = module.eks_blueprints.eks_cluster_endpoint
cluster_ca_certificate = base64decode(module.eks_blueprints.eks_cluster_certificate_authority_data)
token = data.aws_eks_cluster_auth.this.token
}

provider "helm" {
kubernetes {
host = module.eks_blueprints.eks_cluster_endpoint
cluster_ca_certificate = base64decode(module.eks_blueprints.eks_cluster_certificate_authority_data)
token = data.aws_eks_cluster_auth.this.token
}
}

provider "kubectl" {
apply_retry_count = 10
host = module.eks_blueprints.eks_cluster_endpoint
cluster_ca_certificate = base64decode(module.eks_blueprints.eks_cluster_certificate_authority_data)
load_config_file = false
token = data.aws_eks_cluster_auth.this.token
}

data "aws_eks_cluster_auth" "this" {
name = module.eks_blueprints.eks_cluster_id
}

data "aws_availability_zones" "available" {}

locals {
name = basename(path.cwd)
region = "us-west-2"

vpc_cidr = "10.0.0.0/16"
azs = slice(data.aws_availability_zones.available.names, 0, 3)

tags = {
Blueprint = local.name
GithubRepo = "github.com/aws-ia/terraform-aws-eks-blueprints"
}
}

#---------------------------------------------------------------
# EKS Blueprints
#---------------------------------------------------------------

module "eks_blueprints" {
source = "../.."

cluster_name = local.name
cluster_version = "1.23"

vpc_id = module.vpc.vpc_id
private_subnet_ids = module.vpc.private_subnets

managed_node_groups = {
this = {
node_group_name = local.name
instance_types = ["m5.large"]
subnet_ids = module.vpc.private_subnets

min_size = 1
max_size = 2
desired_size = 1

update_config = [{
max_unavailable_percentage = 30
}]
}
}

tags = local.tags
}

module "eks_blueprints_kubernetes_addons" {
source = "../../modules/kubernetes-addons"

eks_cluster_id = module.eks_blueprints.eks_cluster_id
eks_cluster_endpoint = module.eks_blueprints.eks_cluster_endpoint
eks_oidc_provider = module.eks_blueprints.oidc_provider
eks_cluster_version = module.eks_blueprints.eks_cluster_version
eks_cluster_domain = var.eks_cluster_domain

enable_amazon_eks_vpc_cni = true
enable_amazon_eks_coredns = true
enable_amazon_eks_kube_proxy = true

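# ARN of the ACM Private CA created below, consumed by the AWS Private CA issuer add-on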
aws_privateca_acmca_arn = aws_acmpca_certificate_authority.example.arn
enable_appmesh_controller = true
enable_cert_manager = true
enable_aws_privateca_issuer = true

tags = local.tags
}

#---------------------------------------------------------------
# Certificate Resources
#---------------------------------------------------------------
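
# A self-signed root CA is provisioned and activated in three steps: create the
# ROOT CA, issue its certificate from the CA's own CSR, then install that
# certificate on the CA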

resource "aws_acmpca_certificate_authority" "example" {
type = "ROOT"

certificate_authority_configuration {
key_algorithm = "RSA_4096"
signing_algorithm = "SHA512WITHRSA"

subject {
common_name = "example.com"
}
}
}
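
# Root certificate for the CA, self-signed from the CA's own CSR using the root CA template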

resource "aws_acmpca_certificate" "example" {
certificate_authority_arn = aws_acmpca_certificate_authority.example.arn
certificate_signing_request = aws_acmpca_certificate_authority.example.certificate_signing_request
signing_algorithm = "SHA512WITHRSA"

template_arn = "arn:aws:acm-pca:::template/RootCACertificate/V1"

validity {
type = "YEARS"
value = 10
}
}
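
# Installing the signed certificate on the CA activates it so it can issue certificates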

resource "aws_acmpca_certificate_authority_certificate" "example" {
certificate_authority_arn = aws_acmpca_certificate_authority.example.arn

certificate = aws_acmpca_certificate.example.certificate
certificate_chain = aws_acmpca_certificate.example.certificate_chain
}

# This resource creates an AWSPCAClusterIssuer custom resource, which represents the ACM PCA in Kubernetes
resource "kubectl_manifest" "cluster_pca_issuer" {
yaml_body = yamlencode({
apiVersion = "awspca.cert-manager.io/v1beta1"
kind = "AWSPCAClusterIssuer"

metadata = {
name = module.eks_blueprints.eks_cluster_id
}

spec = {
arn = aws_acmpca_certificate_authority.example.arn
region = local.region
}
})
}

# This resource creates a Certificate custom resource, which represents a
# certificate issued by the ACM PCA and stored in a Kubernetes Secret
resource "kubectl_manifest" "example_pca_certificate" {
yaml_body = yamlencode({
apiVersion = "cert-manager.io/v1"
kind = "Certificate"

metadata = {
name = var.certificate_name
namespace = "default"
}

spec = {
commonName = var.certificate_dns
duration = "2160h0m0s"
issuerRef = {
group = "awspca.cert-manager.io"
kind = "AWSPCAClusterIssuer"
name = module.eks_blueprints.eks_cluster_id
}
renewBefore = "360h0m0s"
# Name of the Kubernetes Secret in which the issued certificate is stored
secretName = "${var.certificate_name}-clusterissuer"
usages = [
"server auth",
"client auth"
]
privateKey = {
algorithm = "RSA"
size = 2048
}
}
})

depends_on = [
kubectl_manifest.cluster_pca_issuer,
]
}

#---------------------------------------------------------------
# Supporting Resources
#---------------------------------------------------------------

module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "~> 3.0"

name = local.name
cidr = local.vpc_cidr

azs = local.azs
public_subnets = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 8, k)]
private_subnets = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 8, k + 10)]

enable_nat_gateway = true
single_nat_gateway = true
enable_dns_hostnames = true

# Manage so we can name
manage_default_network_acl = true
default_network_acl_tags = { Name = "${local.name}-default" }
manage_default_route_table = true
default_route_table_tags = { Name = "${local.name}-default" }
manage_default_security_group = true
default_security_group_tags = { Name = "${local.name}-default" }

public_subnet_tags = {
"kubernetes.io/cluster/${local.name}" = "shared"
"kubernetes.io/role/elb" = 1
}

private_subnet_tags = {
"kubernetes.io/cluster/${local.name}" = "shared"
"kubernetes.io/role/internal-elb" = 1
}

tags = local.tags
}
4 changes: 4 additions & 0 deletions examples/appmesh-mtls/outputs.tf
@@ -0,0 +1,4 @@
output "configure_kubectl" {
description = "Configure kubectl: make sure you're logged in with the correct AWS profile and run the following command to update your kubeconfig"
value = module.eks_blueprints.configure_kubectl
}
17 changes: 17 additions & 0 deletions examples/appmesh-mtls/variables.tf
@@ -0,0 +1,17 @@
variable "eks_cluster_domain" {
description = "Route53 domain for the cluster"
type = string
default = "example.com"
}

variable "certificate_name" {
description = "name for the certificate"
type = string
default = "example"
}

variable "certificate_dns" {
description = "CommonName used in the Certificate, usually DNS"
type = string
default = "example.com"
}
29 changes: 29 additions & 0 deletions examples/appmesh-mtls/versions.tf
@@ -0,0 +1,29 @@
terraform {
required_version = ">= 1.0.0"

required_providers {
aws = {
source = "hashicorp/aws"
version = ">= 3.72"
}
kubernetes = {
source = "hashicorp/kubernetes"
version = ">= 2.10"
}
helm = {
source = "hashicorp/helm"
version = ">= 2.4.1"
}
kubectl = {
source = "gavinbunney/kubectl"
version = ">= 1.14"
}
}

# ## Used for end-to-end testing on project; update to suit your needs
# backend "s3" {
# bucket = "terraform-ssp-github-actions-state"
# region = "us-west-2"
# key = "e2e/appmesh-mtls/terraform.tfstate"
# }
}
2 changes: 1 addition & 1 deletion examples/karpenter/README.md
@@ -111,7 +111,7 @@ After a few moments you should see 2 new nodes (one created by each provisioner)
# Output should look like below
NAME STATUS ROLES AGE VERSION PROVISIONER-NAME
ip-10-0-10-14.us-west-2.compute.internal Ready <none> 11m v1.22.9-eks-810597c default
ip-10-0-11-16.us-west-2.compute.internal Ready <none> 70m v1.22.9-eks-810597c
ip-10-0-11-16.us-west-2.compute.internal Ready <none> 70m v1.22.9-eks-810597c
ip-10-0-12-138.us-west-2.compute.internal Ready <none> 4m57s v1.22.9-eks-810597c default-lt

We now have:
4 changes: 4 additions & 0 deletions modules/kubernetes-addons/README.md
@@ -27,6 +27,7 @@
| <a name="module_agones"></a> [agones](#module\_agones) | ./agones | n/a |
| <a name="module_airflow"></a> [airflow](#module\_airflow) | ./airflow | n/a |
| <a name="module_app_2048"></a> [app\_2048](#module\_app\_2048) | ./app-2048 | n/a |
| <a name="module_appmesh_controller"></a> [appmesh\_controller](#module\_appmesh\_controller) | ./appmesh-controller | n/a |
| <a name="module_argo_rollouts"></a> [argo\_rollouts](#module\_argo\_rollouts) | ./argo-rollouts | n/a |
| <a name="module_argocd"></a> [argocd](#module\_argocd) | ./argocd | n/a |
| <a name="module_aws_cloudwatch_metrics"></a> [aws\_cloudwatch\_metrics](#module\_aws\_cloudwatch\_metrics) | ./aws-cloudwatch-metrics | n/a |
@@ -111,6 +112,8 @@
| <a name="input_amazon_eks_vpc_cni_config"></a> [amazon\_eks\_vpc\_cni\_config](#input\_amazon\_eks\_vpc\_cni\_config) | ConfigMap of Amazon EKS VPC CNI add-on | `any` | `{}` | no |
| <a name="input_amazon_prometheus_workspace_endpoint"></a> [amazon\_prometheus\_workspace\_endpoint](#input\_amazon\_prometheus\_workspace\_endpoint) | AWS Managed Prometheus WorkSpace Endpoint | `string` | `null` | no |
| <a name="input_amazon_prometheus_workspace_region"></a> [amazon\_prometheus\_workspace\_region](#input\_amazon\_prometheus\_workspace\_region) | AWS Managed Prometheus WorkSpace Region | `string` | `null` | no |
| <a name="input_appmesh_helm_config"></a> [appmesh\_helm\_config](#input\_appmesh\_helm\_config) | AppMesh Helm Chart config | `any` | `{}` | no |
| <a name="input_appmesh_irsa_policies"></a> [appmesh\_irsa\_policies](#input\_appmesh\_irsa\_policies) | Additional IAM policies for a IAM role for service accounts | `list(string)` | `[]` | no |
| <a name="input_argo_rollouts_helm_config"></a> [argo\_rollouts\_helm\_config](#input\_argo\_rollouts\_helm\_config) | Argo Rollouts Helm Chart config | `any` | `null` | no |
| <a name="input_argocd_applications"></a> [argocd\_applications](#input\_argocd\_applications) | Argo CD Applications config to bootstrap the cluster | `any` | `{}` | no |
| <a name="input_argocd_helm_config"></a> [argocd\_helm\_config](#input\_argocd\_helm\_config) | Argo CD Kubernetes add-on config | `any` | `{}` | no |
@@ -173,6 +176,7 @@
| <a name="input_enable_amazon_eks_vpc_cni"></a> [enable\_amazon\_eks\_vpc\_cni](#input\_enable\_amazon\_eks\_vpc\_cni) | Enable VPC CNI add-on | `bool` | `false` | no |
| <a name="input_enable_amazon_prometheus"></a> [enable\_amazon\_prometheus](#input\_enable\_amazon\_prometheus) | Enable AWS Managed Prometheus service | `bool` | `false` | no |
| <a name="input_enable_app_2048"></a> [enable\_app\_2048](#input\_enable\_app\_2048) | Enable sample app 2048 | `bool` | `false` | no |
| <a name="input_enable_appmesh_controller"></a> [enable\_appmesh\_controller](#input\_enable\_appmesh\_controller) | Enable AppMesh add-on | `bool` | `false` | no |
| <a name="input_enable_argo_rollouts"></a> [enable\_argo\_rollouts](#input\_enable\_argo\_rollouts) | Enable Argo Rollouts add-on | `bool` | `false` | no |
| <a name="input_enable_argocd"></a> [enable\_argocd](#input\_enable\_argocd) | Enable Argo CD Kubernetes add-on | `bool` | `false` | no |
| <a name="input_enable_aws_cloudwatch_metrics"></a> [enable\_aws\_cloudwatch\_metrics](#input\_enable\_aws\_cloudwatch\_metrics) | Enable AWS CloudWatch Metrics add-on for Container Insights | `bool` | `false` | no |