refactor: Update teams multi-tenancy example to use new module (#1549)
bryantbiggs authored Apr 11, 2023
1 parent 86e8337 commit 531eb42
Showing 8 changed files with 141 additions and 181 deletions.
2 changes: 1 addition & 1 deletion .pre-commit-config.yaml
@@ -10,7 +10,7 @@ repos:
- id: detect-aws-credentials
args: ['--allow-missing-credentials']
- repo: https://github.com/antonbabenko/pre-commit-terraform
rev: v1.77.1
rev: v1.77.2
hooks:
- id: terraform_fmt
- id: terraform_docs
6 changes: 3 additions & 3 deletions examples/karpenter/README.md
@@ -4,9 +4,9 @@ This example demonstrates how to provision Karpenter on a serverless cluster (

This example solution provides:

- AWS EKS Cluster (control plane)
- AWS EKS Fargate Profiles for the `kube-system` namespace, which is used by the `coredns`, `vpc-cni`, and `kube-proxy` addons, as well as a profile that matches the `karpenter` namespace, which is used by Karpenter.
- AWS EKS managed addons `coredns`, `vpc-cni` and `kube-proxy`
- Amazon EKS Cluster (control plane)
- Amazon EKS Fargate Profiles for the `kube-system` namespace, which is used by the `coredns`, `vpc-cni`, and `kube-proxy` addons, as well as a profile that matches the `karpenter` namespace, which is used by Karpenter.
- Amazon EKS managed addons `coredns`, `vpc-cni` and `kube-proxy`
`coredns` has been patched to run on Fargate, and `vpc-cni` has been configured to use prefix delegation to better support the max pods setting of 110 on the Karpenter provisioner
- A sample deployment is provided to demonstrate scaling a deployment and to show how Karpenter provisions, and de-provisions, resources on demand
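
After deploying this example, one way to sanity-check the Fargate setup is sketched below; it assumes your kubeconfig already points at the cluster, and `<CLUSTER_NAME>` and `<REGION>` are placeholders for your own values:

```sh
# List the Fargate profiles created for the cluster
aws eks list-fargate-profiles --cluster-name <CLUSTER_NAME> --region <REGION>

# coredns pods should be Running and scheduled onto Fargate capacity
# (Fargate-backed nodes show up with a "fargate-" name prefix)
kubectl get pods -n kube-system -o wide
kubectl get nodes
```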

77 changes: 22 additions & 55 deletions examples/multi-tenancy-with-teams/README.md
@@ -1,80 +1,47 @@
# EKS Cluster with Teams to a new VPC
# Multi-Tenancy w/ Teams

This example deploys a new EKS Cluster with Teams to a new VPC.
This example demonstrates how to provision and configure a multi-tenancy Amazon EKS cluster with safeguards for resource consumption and namespace isolation.

- Creates a new sample VPC, 3 Private Subnets and 3 Public Subnets
- Creates an Internet gateway for the Public Subnets and a NAT Gateway for the Private Subnets
- Creates an EKS Cluster Control plane with public endpoint with one managed node group
- Creates two application teams - blue and red and deploys team manifests to the cluster
- Creates a single platform admin team - you will need to provide your own IAM user/role first, see the example for more details
This example solution provides:

## How to Deploy
- Amazon EKS Cluster (control plane)
- Amazon EKS managed nodegroup (data plane)
- Two development teams - `team-red` and `team-blue` - isolated to their respective namespaces
- An admin team with privileged access to the cluster (`team-admin`)

### Prerequisites:
## Prerequisites:

Ensure that you have installed the following tools on your Mac or Windows laptop before working with this module and running Terraform plan and apply
Ensure that you have the following tools installed locally:

1. [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html)
2. [Kubectl](https://Kubernetes.io/docs/tasks/tools/)
3. [Terraform](https://learn.hashicorp.com/tutorials/terraform/install-cli)
1. [aws cli](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html)
2. [kubectl](https://Kubernetes.io/docs/tasks/tools/)
3. [terraform](https://learn.hashicorp.com/tutorials/terraform/install-cli)
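
If you want to confirm the tools are available before you begin, a quick check along these lines should be enough (output will vary with the versions you have installed):

```sh
aws --version
kubectl version --client
terraform version
```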

### Deployment Steps
## Deploy

#### Step 1: Clone the repo using the command below
To provision this example:

```sh
git clone https://github.com/aws-ia/terraform-aws-eks-blueprints.git
```

#### Step 2: Run `terraform init`

to initialize a working directory with configuration files

```sh
cd examples/multi-tenancy-with-teams/
terraform init
terraform apply
```

#### Step 3: Run `terraform plan`
Enter `yes` at the command prompt to apply

to verify the resources created by this execution
## Validate

```sh
export AWS_REGION=<enter-your-region> # Select your own region
terraform plan
```
The following command will update the `kubeconfig` on your local machine and allow you to interact with your EKS Cluster using `kubectl`.

#### Step 4: Finally, `terraform apply`

to create resources
1. Run `update-kubeconfig` command:

```sh
terraform apply
aws eks --region <REGION> update-kubeconfig --name <CLUSTER_NAME>
```
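
2. With the kubeconfig updated, a couple of `kubectl` checks along these lines can confirm the data plane and the team namespaces are in place (namespace names depend on the team configuration in `main.tf`):

```sh
# Worker nodes from the managed node group should report Ready
kubectl get nodes

# Namespaces created by the teams module should be listed here
kubectl get namespaces
```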

Enter `yes` to apply

### Configure kubectl and test cluster

EKS cluster details, including the cluster name, can be extracted from the Terraform output or from the AWS Console. The following command updates the `kubeconfig` on your local machine so that you can run `kubectl` commands against your EKS cluster.

#### Step 5: Run update-kubeconfig command.

The `~/.kube/config` file is updated with the cluster details and certificate by the command below

$ aws eks --region <enter-your-region> update-kubeconfig --name <cluster-name>

#### Step 6: List all the worker nodes by running the command below

$ kubectl get nodes

#### Step 7: List all the pods running in kube-system namespace

$ kubectl get pods -n kube-system
## Destroy

## How to Destroy
To tear down and remove the resources created in this example:

```sh
cd examples/multi-tenancy-with-teams
terraform destroy -auto-approve
```
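
Optionally, you may also want to remove the local kubeconfig entry for the deleted cluster; the context name below is a placeholder for whatever `update-kubeconfig` created on your machine:

```sh
kubectl config get-contexts
kubectl config delete-context <CONTEXT_NAME>
```
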
168 changes: 109 additions & 59 deletions examples/multi-tenancy-with-teams/main.tf
@@ -3,29 +3,29 @@ provider "aws" {
}

provider "kubernetes" {
host = module.eks_blueprints.eks_cluster_endpoint
cluster_ca_certificate = base64decode(module.eks_blueprints.eks_cluster_certificate_authority_data)
host = module.eks.cluster_endpoint
cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
token = data.aws_eks_cluster_auth.this.token
}

provider "helm" {
kubernetes {
host = module.eks_blueprints.eks_cluster_endpoint
cluster_ca_certificate = base64decode(module.eks_blueprints.eks_cluster_certificate_authority_data)
host = module.eks.cluster_endpoint
cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
token = data.aws_eks_cluster_auth.this.token
}
}

provider "kubectl" {
apply_retry_count = 10
host = module.eks_blueprints.eks_cluster_endpoint
cluster_ca_certificate = base64decode(module.eks_blueprints.eks_cluster_certificate_authority_data)
host = module.eks.cluster_endpoint
cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
load_config_file = false
token = data.aws_eks_cluster_auth.this.token
}

data "aws_eks_cluster_auth" "this" {
name = module.eks_blueprints.eks_cluster_id
name = module.eks.cluster_name
}

data "aws_caller_identity" "current" {}
@@ -48,72 +48,122 @@ locals {
# EKS Blueprints
#---------------------------------------------------------------

module "eks_blueprints" {
source = "../.."
#tfsec:ignore:aws-eks-enable-control-plane-logging
module "eks" {
source = "terraform-aws-modules/eks/aws"
version = "~> 19.12"

cluster_name = local.name
cluster_version = "1.24"
cluster_name = local.name
cluster_version = "1.25"
cluster_endpoint_public_access = true

vpc_id = module.vpc.vpc_id
private_subnet_ids = module.vpc.private_subnets
vpc_id = module.vpc.vpc_id
subnet_ids = module.vpc.private_subnets

managed_node_groups = {
mg_5 = {
node_group_name = "managed-ondemand"
instance_types = ["m5.large"]
subnet_ids = module.vpc.private_subnets
eks_managed_node_groups = {
initial = {
instance_types = ["m5.large"]

min_size = 1
max_size = 5
desired_size = 2
}
}

platform_teams = {
admin = {
users = [data.aws_caller_identity.current.arn]
manage_aws_auth_configmap = true
aws_auth_roles = flatten([
module.eks_blueprints_admin_team.aws_auth_configmap_role,
[for team in module.eks_blueprints_dev_teams : team.aws_auth_configmap_role],
])

tags = local.tags
}

module "eks_blueprints_admin_team" {
source = "aws-ia/eks-blueprints-teams/aws"
version = "~> 0.2"

name = "admin-team"

enable_admin = true
users = [data.aws_caller_identity.current.arn]
cluster_arn = module.eks.cluster_arn

tags = local.tags
}

module "eks_blueprints_dev_teams" {
source = "aws-ia/eks-blueprints-teams/aws"
version = "~> 0.2"

for_each = {
red = {
labels = {
project = "SuperSecret"
}
}
blue = {}
}
name = "team-${each.key}"

# EKS Teams
application_teams = {
team-red = {
"labels" = {
"appName" = "read-team-app",
"projectName" = "project-red",
"environment" = "example",
"domain" = "example",
"uuid" = "example",
"billingCode" = "example",
"branch" = "example"
}
"quota" = {
"requests.cpu" = "1000m",
"requests.memory" = "4Gi",
"limits.cpu" = "2000m",
"limits.memory" = "8Gi",
"pods" = "10",
"secrets" = "10",
"services" = "10"
}
users = [data.aws_caller_identity.current.arn]
cluster_arn = module.eks.cluster_arn
oidc_provider_arn = module.eks.oidc_provider_arn

manifests_dir = "./manifests-team-red"
users = [data.aws_caller_identity.current.arn]
}
labels = merge(
{
team = each.key
},
try(each.value.labels, {})
)

annotations = {
team = each.key
}

team-blue = {
"labels" = {
"appName" = "blue-team-app",
"projectName" = "project-blue",
namespaces = {
"blue-${each.key}" = {
labels = {
appName = "blue-team-app",
projectName = "project-blue",
}
"quota" = {
"requests.cpu" = "2000m",
"requests.memory" = "4Gi",
"limits.cpu" = "4000m",
"limits.memory" = "16Gi",
"pods" = "20",
"secrets" = "20",
"services" = "20"

resource_quota = {
hard = {
"requests.cpu" = "2000m",
"requests.memory" = "4Gi",
"limits.cpu" = "4000m",
"limits.memory" = "16Gi",
"pods" = "20",
"secrets" = "20",
"services" = "20"
}
}

manifests_dir = "./manifests-team-blue"
users = [data.aws_caller_identity.current.arn]
limit_range = {
limit = [
{
type = "Pod"
max = {
cpu = "200m"
memory = "1Gi"
}
},
{
type = "PersistentVolumeClaim"
min = {
storage = "24M"
}
},
{
type = "Container"
default = {
cpu = "50m"
memory = "24Mi"
}
}
]
}
}
}
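
Once this configuration is applied, the guardrails defined above can be inspected directly from the cluster; the sketch below assumes your kubeconfig targets this cluster and `<TEAM_NAMESPACE>` stands in for one of the namespaces created by the teams module:

```sh
# Resource quota enforced in a team namespace
kubectl describe resourcequota -n <TEAM_NAMESPACE>

# Pod, PersistentVolumeClaim, and Container limits from the limit range
kubectl describe limitrange -n <TEAM_NAMESPACE>

# aws-auth entries mapping the generated team IAM roles into the cluster
kubectl get configmap aws-auth -n kube-system -o yaml
```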


This file was deleted.

This file was deleted.

