feat: add upstream module with k8s-addons example #698

Merged · 2 commits · Jun 28, 2022
87 changes: 87 additions & 0 deletions examples/upstream-with-k8s-addons/README.md
@@ -0,0 +1,87 @@
# EKS upstream module with the Blueprints Kubernetes add-ons module

Customers often ask whether they can use the [upstream terraform eks module](https://github.com/terraform-aws-modules/terraform-aws-eks) to provision the cluster and use eks-blueprints only for the Kubernetes add-ons. This example shows how to achieve that.
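
The essential wiring, condensed from this example's `main.tf`, is to feed the upstream module's outputs into the Blueprints `kubernetes-addons` module:

```hcl
# Condensed sketch from this example's main.tf: `module.eks` is the
# upstream terraform-aws-modules/eks/aws module, and its outputs drive
# the Blueprints kubernetes-addons module.
module "eks_blueprints_kubernetes_addons" {
  source = "../../modules/kubernetes-addons"

  # Outputs of the upstream EKS module
  eks_cluster_id       = module.eks.cluster_id
  eks_cluster_endpoint = module.eks.cluster_endpoint
  eks_oidc_provider    = module.eks.oidc_provider
  eks_cluster_version  = module.eks.cluster_version

  # Enable the add-ons you need, for example
  enable_metrics_server = true
}
```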

## How to Deploy

### Prerequisites

Ensure that you have installed the following tools on your local machine before working with this module and running `terraform plan` and `terraform apply`:

1. [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html)
2. [kubectl](https://Kubernetes.io/docs/tasks/tools/)
3. [Terraform](https://learn.hashicorp.com/tutorials/terraform/install-cli)

### Deployment Steps

#### Step 1: Clone the repo using the command below

```sh
git clone https://github.com/aws-ia/terraform-aws-eks-blueprints.git
```

#### Step 2: Run Terraform INIT

Initialize a working directory with configuration files

```sh
cd examples/upstream-with-k8s-addons
terraform init
```

#### Step 3: Run Terraform PLAN

Review the resources that this configuration will create:

```sh
terraform plan
```

Note: you can change the region (it defaults to `us-west-2`) in the `locals` block of `main.tf`.
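
For reference, the `locals` block from this example's `main.tf`, where the region is set:

```hcl
locals {
  name   = basename(path.cwd)
  region = "us-west-2" # change the deployment region here

  vpc_cidr = "10.0.0.0/16"
  azs      = slice(data.aws_availability_zones.available.names, 0, 3)

  tags = {
    Blueprint  = local.name
    GithubRepo = "github.com/aws-ia/terraform-aws-eks-blueprints"
  }
}
```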

#### Step 4: Finally, Terraform APPLY

**Deploy the pattern**

```sh
terraform apply
```

Enter `yes` to apply.

### Configure `kubectl` and test cluster

The EKS cluster name can be obtained from the Terraform output or from the AWS Console.
The following command updates the `kubeconfig` on the machine from which you run `kubectl` commands against your EKS cluster.

#### Step 5: Run `update-kubeconfig` command

The command below updates `~/.kube/config` with the cluster details and certificate:

```sh
aws eks --region <enter-your-region> update-kubeconfig --name <cluster-name>
```

#### Step 6: List all the worker nodes by running the command below

```sh
kubectl get nodes
```

#### Step 7: List all the pods running in `kube-system` namespace

```sh
kubectl get pods -n kube-system
```

## Cleanup

To clean up your environment, destroy the Terraform modules in reverse order.

Destroy the Kubernetes add-ons, then the EKS cluster with its node groups, and finally the VPC:

```sh
terraform destroy -target="module.eks_blueprints_kubernetes_addons" -auto-approve
terraform destroy -target="module.eks" -auto-approve
terraform destroy -target="module.vpc" -auto-approve
```

Finally, destroy any additional resources that are not in the above modules

```sh
terraform destroy -auto-approve
```
144 changes: 144 additions & 0 deletions examples/upstream-with-k8s-addons/main.tf
@@ -0,0 +1,144 @@
provider "aws" {
region = local.region
}

provider "kubernetes" {
host = module.eks.cluster_endpoint
cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)

exec {
api_version = "client.authentication.k8s.io/v1beta1"
command = "aws"
# This requires the awscli to be installed locally where Terraform is executed
args = ["eks", "get-token", "--cluster-name", module.eks.cluster_id]
}
}

provider "helm" {
kubernetes {
host = module.eks.cluster_endpoint
cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)

exec {
api_version = "client.authentication.k8s.io/v1beta1"
command = "aws"
# This requires the awscli to be installed locally where Terraform is executed
args = ["eks", "get-token", "--cluster-name", module.eks.cluster_id]
}
}
}

data "aws_availability_zones" "available" {}

locals {
name = basename(path.cwd)
region = "us-west-2"

vpc_cidr = "10.0.0.0/16"
azs = slice(data.aws_availability_zones.available.names, 0, 3)

tags = {
Blueprint = local.name
GithubRepo = "github.com/aws-ia/terraform-aws-eks-blueprints"
}
}

#---------------------------------------------------------------
# EKS Cluster with terraform-aws-eks module
#---------------------------------------------------------------

module "eks" {
Contributor Author: Tried to keep the example as simple as possible; if you believe we can further remove more pieces here, let me know.

source = "terraform-aws-modules/eks/aws"
version = "~> 18.0"

cluster_name = local.name
cluster_version = "1.22"
cluster_endpoint_private_access = true

vpc_id = module.vpc.vpc_id
subnet_ids = module.vpc.private_subnets

eks_managed_node_group_defaults = {
Contributor: I believe this example should be an upstream module rather than a blueprint.

Contributor Author: @vara-bonthu customers want to see how the kubernetes-addons module can be used with the upstream module.

instance_types = ["m6i.large", "m5.large", "m5n.large", "m5zn.large"]
create_security_group = false
}

eks_managed_node_groups = {
bottlerocket = {
ami_type = "BOTTLEROCKET_x86_64"
platform = "bottlerocket"

min_size = 1
max_size = 7
desired_size = 1

update_config = {
max_unavailable_percentage = 33
}
}
}
}

#---------------------------------------------------------------
# Kubernetes Addons using Blueprints kubernetes-addons module
#---------------------------------------------------------------

module "eks_blueprints_kubernetes_addons" {
source = "../../modules/kubernetes-addons"

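# The key integration point of this example: outputs of the upstream
# terraform-aws-eks module drive the Blueprints kubernetes-addons module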
eks_cluster_id = module.eks.cluster_id
eks_cluster_endpoint = module.eks.cluster_endpoint
eks_oidc_provider = module.eks.oidc_provider
eks_cluster_version = module.eks.cluster_version
Contributor Author (on lines +89 to +92): This is really the most important part of this example; should we call this out in the README or add a comment here?


# EKS Managed Add-ons
enable_amazon_eks_vpc_cni = true
enable_amazon_eks_coredns = true
enable_amazon_eks_kube_proxy = true

# Add-ons
enable_metrics_server = true
enable_cluster_autoscaler = true
enable_aws_cloudwatch_metrics = true

tags = local.tags
}

#---------------------------------------------------------------
# Supporting Resources
#---------------------------------------------------------------
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "~> 3.0"

name = local.name
cidr = local.vpc_cidr

azs = local.azs
public_subnets = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 8, k)]
private_subnets = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 8, k + 10)]

enable_nat_gateway = true
single_nat_gateway = true
enable_dns_hostnames = true

# Manage so we can name
manage_default_network_acl = true
default_network_acl_tags = { Name = "${local.name}-default" }
manage_default_route_table = true
default_route_table_tags = { Name = "${local.name}-default" }
manage_default_security_group = true
default_security_group_tags = { Name = "${local.name}-default" }

public_subnet_tags = {
"kubernetes.io/cluster/${local.name}" = "shared"
"kubernetes.io/role/elb" = 1
}

private_subnet_tags = {
"kubernetes.io/cluster/${local.name}" = "shared"
"kubernetes.io/role/internal-elb" = 1
}

tags = local.tags
}
4 changes: 4 additions & 0 deletions examples/upstream-with-k8s-addons/outputs.tf
@@ -0,0 +1,4 @@
output "configure_kubectl" {
description = "Configure kubectl: make sure you're logged in with the correct AWS profile and run the following command to update your kubeconfig"
value = "aws eks --region ${local.region} update-kubeconfig --name ${module.eks.cluster_id}"
}
Empty file.
25 changes: 25 additions & 0 deletions examples/upstream-with-k8s-addons/versions.tf
@@ -0,0 +1,25 @@
terraform {
required_version = ">= 1.0.0"

required_providers {
aws = {
source = "hashicorp/aws"
version = ">= 3.72"
}
kubernetes = {
source = "hashicorp/kubernetes"
version = ">= 2.10"
}
helm = {
source = "hashicorp/helm"
version = ">= 2.4.1"
}
}

# ## Used for end-to-end testing on project; update to suit your needs
# backend "s3" {
# bucket = "terraform-ssp-github-actions-state"
# region = "us-west-2"
# key = "e2e/upstream-with-k8s-addons/terraform.tfstate"
# }
}