Feature/ k8s-eks-v1.17 layer tested #398

Merged: 10 commits, Jun 30, 2022
147 changes: 147 additions & 0 deletions apps-devstg/us-east-1/k8s-eks-v1.17/README.md
@@ -0,0 +1,147 @@
# AWS EKS Reference Layer

## Overview
This documentation should help you understand the different pieces that make up this
EKS cluster. With that understanding you should be able to create copies of this
cluster, modified to serve other goals such as having one cluster per environment.

This layer holds the Terraform code to orchestrate and deploy our EKS reference
architecture (cluster, network, k8s resources). Consider that we already have an [AWS Landing Zone](https://github.com/binbashar/le-tf-infra-aws)
deployed as a baseline, which allows us to create, extend and enable new components on top of it.

## Code Organization
The EKS layer (`apps-devstg/us-east-1/k8s-eks-v1.17`) is divided into sublayers which
have clear, specific purposes.

### The "network" layer
This is where we define the VPC resources for this cluster.

### The "cluster" layer
This layer defines the cluster attributes, such as node groups and the Kubernetes version.

### The "identities" layer
This layer defines the EKS IRSA roles that are later assumed by workloads (via their service accounts) running in the cluster.

### The "k8s-components" layer
This layer defines the base cluster components such as ingress controllers, certificate managers, DNS managers, CI/CD components, and more.

### The "k8s-workloads" layer
This layer defines the cluster workloads such as web apps, APIs, back-end microservices, etc.
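
The resulting directory layout looks roughly like this (only the sublayers described above are shown):

```
apps-devstg/us-east-1/k8s-eks-v1.17/
├── network/          # VPC, subnets, NAT Gateway
├── cluster/          # EKS cluster, node groups, Kubernetes version
├── identities/       # EKS IRSA roles
├── k8s-components/   # ingress controllers, cert/dns managers, CI/CD components
└── k8s-workloads/    # web apps, APIs, back-end microservices
```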

### Current EKS Cluster Creation Workflows

Following the [Leverage Terraform workflow](https://leverage.binbash.com.ar/user-guide/ref-architecture-aws/workflow/),
the EKS layers need to be orchestrated in the following order (a command sketch follows the list):

1. Network
   1. Edit the `network.auto.tfvars` file
   2. Set the NAT Gateway toggle to `true` to enable its creation
   3. Then run `leverage tf apply`
2. Cluster
   1. Since we’re deploying a private K8s cluster you’ll need to be **connected to the VPN**
   2. Go to this layer and run `leverage tf apply`
   3. In the output you should see the credentials you need to talk to the Kubernetes API via kubectl (or other clients):

```
apps-devstg//k8s-eks-v1.17/cluster$ leverage terraform output

...
kubectl_config = apiVersion: v1
preferences: {}
kind: Config

clusters:
- cluster:
    server: https://9E9E4EC03A0E83CF00A9A02F8EFC1F00.gr7.us-east-1.eks.amazonaws.com
    certificate-authority-data: LS0t...S0tLQo=
  name: eks_bb-apps-devstg-eks-demoapps

contexts:
- context:
    cluster: eks_bb-apps-devstg-eks-demoapps
    user: eks_bb-apps-devstg-eks-demoapps
  name: eks_bb-apps-devstg-eks-demoapps

current-context: eks_bb-apps-devstg-eks-demoapps

users:
- name: eks_bb-apps-devstg-eks-demoapps
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "bb-apps-devstg-eks-demoapps"
        - --cache
      env:
        - name: AWS_CONFIG_FILE
          value: $HOME/.aws/bb/config
        - name: AWS_PROFILE
          value: bb-apps-devstg-devops
        - name: AWS_SHARED_CREDENTIALS_FILE
          value: $HOME/.aws/bb/credentials
```

3. Identities
   1. Go to this layer and run `leverage tf apply`
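
The following is a minimal sketch of that workflow, assuming the default layer paths of this
repository (the exact NAT Gateway variable name inside `network.auto.tfvars` may differ):

```
# 1) Network: enable the NAT Gateway toggle in network.auto.tfvars, then apply
cd apps-devstg/us-east-1/k8s-eks-v1.17/network
leverage tf apply

# 2) Cluster: requires an active VPN connection (private K8s API endpoint)
cd ../cluster
leverage tf apply
leverage tf output        # prints the kubeconfig credentials shown above

# 3) Identities: IRSA roles assumed by workloads in the cluster
cd ../identities
leverage tf apply
```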

#### Setup auth and test cluster connectivity
1. Connecting to the K8s EKS cluster
2. Since we’re deploying a private K8s cluster you’ll need to be **connected to the VPN**
3. Install `kubectl` on your workstation
   1. https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#install-using-native-package-management
   2. https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#install-with-homebrew-on-macos
   3. 📒 NOTE: consider using `kubectl` version 1.22 or 1.23 (not the latest)
4. Install `aws-iam-authenticator` on your workstation
   1. https://docs.aws.amazon.com/eks/latest/userguide/install-aws-iam-authenticator.html
5. Export the AWS credentials
   1. `export AWS_SHARED_CREDENTIALS_FILE="$HOME/.aws/bb/credentials"`
   2. `export AWS_CONFIG_FILE="$HOME/.aws/bb/config"`
6. The `k8s-eks-v1.17/cluster` layer generates the `kubeconfig` file in the output of the apply (also available by running `leverage tf output`), similar to https://github.com/binbashar/le-devops-workflows/blob/master/README.md#eks-clusters-kubeconfig-file
   1. Edit that file to replace `$HOME` with the path to your home dir
   2. Place the kubeconfig in `~/.kube/bb/apps-devstg` and then export `KUBECONFIG=~/.kube/bb/apps-devstg` to help tools like kubectl find a way to talk to the cluster (or prefix individual commands, e.g. `KUBECONFIG=~/.kube/bb/apps-devstg kubectl get pods --all-namespaces`)
   3. You should now be able to run kubectl commands (https://kubernetes.io/docs/reference/kubectl/cheatsheet/); see the sketch after this list
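
The following sketch summarizes the steps above (paths are examples; adjust them to your workstation):

```
# Point the AWS tooling at the project credentials
export AWS_SHARED_CREDENTIALS_FILE="$HOME/.aws/bb/credentials"
export AWS_CONFIG_FILE="$HOME/.aws/bb/config"

# Save the kubeconfig produced by the cluster layer output
# (copy it from `leverage tf output` and replace $HOME with your real home dir)
mkdir -p ~/.kube/bb
vi ~/.kube/bb/apps-devstg

# Tell kubectl which kubeconfig to use and test connectivity (VPN must be up)
export KUBECONFIG=~/.kube/bb/apps-devstg
kubectl get pods --all-namespaces
```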

### K8s EKS Cluster Components and Workloads deployment

1. Cluster Components (k8s-components)
   1. Go to this layer and run `leverage tf apply`
   2. You can use the `apps.auto.tfvars` file to configure which components get installed
   3. Important: for private repo integrations, after ArgoCD has been successfully installed you will need to create this secret object in the cluster (see the sketch after this list). Before creating the secret you need to update it to add the private SSH key that will grant ArgoCD permission to read the repository where the application definition files are located. Note that this manual step is only a workaround that could be automated to simplify the orchestration.
2. Workloads (k8s-workloads)
   1. Go to this layer and run `leverage tf apply`
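
Below is a hypothetical example of that secret, using ArgoCD's declarative repository-credentials
format. The secret name, repository URL and the key itself are placeholders; replace them with the
values that apply to your private repository:

```
cat > argocd-repo-secret.yaml <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: demo-apps-repo                                # placeholder name
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  type: git
  url: git@github.com:binbashar/le-demo-apps.git      # placeholder repo URL
  sshPrivateKey: |
    -----BEGIN OPENSSH PRIVATE KEY-----
    <paste the private SSH key that has read access to the repo>
    -----END OPENSSH PRIVATE KEY-----
EOF

kubectl apply -f argocd-repo-secret.yaml
```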

## Accessing the EKS Kubernetes resources (connectivity)
To access the Kubernetes resources using `kubectl`, take into account that you need to be **connected
to the VPN**, since all our implementations are exposed via private endpoints (private VPC subnets).

### Connecting to ArgoCD
1. Since we’re deploying a private K8s cluster you’ll need to be connected to the VPN
2. From your web browser, access https://argocd.us-east-1.devstg.aws.binbash.com.ar/
3. With the current `4.5.7` version we are using, the default admin password is stored in a secret.
   1. To obtain it, use this command: `kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d`
4. The default username is **admin**.
5. You'll see the [EmojiVoto demo app](https://github.com/binbashar/le-demo-apps/tree/master/emojivoto/argocd) deployed and accessible at https://emojivoto.devstg.aws.binbash.com.ar/

**CONSIDERATION**
When running kubectl commands you may get the following warning:

`/apps-devstg/us-east-1/k8s-eks/cluster$ kubectl get pods -n argocd -l app.kubernetes.io/name=argocd-server -o name | cut -d'/' -f 2`

```
Cache file /home/user/.kube/cache/aws-iam-authenticator/credentials.yaml does not exist.
No cached credential available. Refreshing...
Unable to cache credential: ProviderNotExpirer: provider SharedConfigCredentials: /home/user/.aws/bb/credentials does not support ExpiresAt()
```

This is about aws-iam-authenticator not finding an `ExpiresAt` entry in the `/home/user/.aws/bb/credentials` file.

**UPDATE on the kubectl/aws-iam-authenticator warning:**

It seems to be related to https://github.com/kubernetes-sigs/aws-iam-authenticator/issues/219.
Basically, kubectl delegates to aws-iam-authenticator to retrieve the token it needs to talk to the k8s API, but aws-iam-authenticator fails to provide it in the format kubectl expects: since it is using an SSO flow, the ExpiresAt field is missing.

In other words, with the old AWS IAM flow aws-iam-authenticator is able to comply because that flow includes an expiration value along with the temporary credentials; the SSO flow, however, doesn't include an expiration value for the temporary credentials, since that expiration exists at the SSO token level, not at the temporary credentials level (which are obtained through said token).
2 changes: 1 addition & 1 deletion apps-devstg/us-east-1/k8s-eks-v1.17/cluster/config.tf
@@ -46,7 +46,7 @@ data "terraform_remote_state" "eks-vpc" {
region = var.region
profile = var.profile
bucket = var.bucket
key = "apps-devstg/k8s-eks/network/terraform.tfstate"
key = "apps-devstg/k8s-eks-v1.17/network/terraform.tfstate"
}
}

9 changes: 9 additions & 0 deletions apps-devstg/us-east-1/k8s-eks-v1.17/cluster/locals.tf
@@ -49,5 +49,14 @@ locals {
groups = [
"system:masters"]
},
#
# Allow DevOps SSO role to become cluster admins
#
{
rolearn = "arn:aws:iam::${var.appsdevstg_account_id}:role/aws-reserved/sso.amazonaws.com/AWSReservedSSO_DevOps_5e0501636a32f9c4"
username = "DevOps"
groups = [
"system:masters"]
},
]
}
2 changes: 1 addition & 1 deletion apps-devstg/us-east-1/k8s-eks-v1.17/identities/config.tf
@@ -37,7 +37,7 @@ data "terraform_remote_state" "apps-devstg-eks-cluster" {
region = var.region
profile = var.profile
bucket = var.bucket
key = "${var.environment}/k8s-eks/cluster/terraform.tfstate"
key = "${var.environment}/k8s-eks-v1.17/cluster/terraform.tfstate"
}
}

@@ -99,7 +99,6 @@ resource "aws_iam_policy" "externaldns_binbash_com_ar" {
EOF
}


#
# Cluster Autoscaler
#
1 change: 0 additions & 1 deletion apps-devstg/us-east-1/k8s-eks-v1.17/identities/roles.tf
@@ -79,7 +79,6 @@ module "role_externaldns_public" {
}
}


#
# Role: Cluster Autoscaler
#
@@ -46,8 +46,9 @@ enable_prometheus_dependencies = false
#------------------------------------------------------------------------------
# Kubernetes Dashboard
#------------------------------------------------------------------------------
enable_kubernetes_dashboard = false
kubernetes_dashboard_hosts = "kubernetes-dashboard.us-east-1.devstg.aws.binbash.com.ar"
enable_kubernetes_dashboard = false
kubernetes_dashboard_ingress_class = "ingress-nginx-private"
kubernetes_dashboard_hosts = "kubernetes-dashboard.us-east-1.devstg.aws.binbash.com.ar"

#------------------------------------------------------------------------------
# CICD | ArgoCD
@@ -62,6 +62,6 @@ data "terraform_remote_state" "eks-cluster" {
region = var.region
profile = var.profile
bucket = var.bucket
key = "apps-devstg/k8s-eks/cluster/terraform.tfstate"
key = "apps-devstg/k8s-eks-v1.17/cluster/terraform.tfstate"
}
}
@@ -62,6 +62,6 @@ data "terraform_remote_state" "eks-cluster" {
region = var.region
profile = var.profile
bucket = var.bucket
key = "apps-devstg/k8s-eks/cluster/terraform.tfstate"
key = "apps-devstg/k8s-eks-v1.17/cluster/terraform.tfstate"
}
}
10 changes: 9 additions & 1 deletion apps-devstg/us-east-1/k8s-eks-v1.17/network/locals.tf
@@ -1,5 +1,5 @@
locals {
cluster_name = "${var.project}-${var.environment}-eks-primary"
cluster_name = "${var.project}-${var.environment}-eks-v117-1ry"

# Network Local Vars
# https://docs.aws.amazon.com/eks/latest/userguide/network_reqs.html
@@ -109,6 +109,14 @@ locals {
protocol = "udp"
cidr_block = "0.0.0.0/0"
},
{
rule_number = 940 # HCP Vault HVN vpc
rule_action = "allow"
from_port = 0
to_port = 65535
protocol = "all"
cidr_block = var.vpc_vault_hvn_cidr
},
]

#
2 changes: 2 additions & 0 deletions apps-devstg/us-east-1/k8s-eks-v1.17/network/network.tf
@@ -55,6 +55,8 @@ locals {
}
)
}

# VPC Endpoints
module "vpc_endpoints" {
source = "github.com/binbashar/terraform-aws-vpc.git//modules/vpc-endpoints?ref=v3.11.0"

@@ -14,7 +14,7 @@ resource "aws_vpc_peering_connection_accepter" "with_vault_hvn" {
# Update Route Tables to go through the VPC Peering Connection
# ---
# Both private and public subnets traffic will be routed and permitted through VPC Peerings (filtered by Private Inbound NACLs)
# If stryctly needed private subnets must be exposed via Load Balancers (NLBs || ALBs)
# If strictly needed private subnets must be exposed via Load Balancers (NLBs || ALBs)
# reducing public IPs exposure whenever possible.
# read more: https://github.com/binbashar/le-tf-infra-aws/issues/49
#

This file was deleted.

@@ -2,9 +2,8 @@
# Providers
#
provider "aws" {
region = var.region_secondary
profile = var.profile
shared_credentials_file = "~/.aws/${var.project}/config"
region = var.region
profile = var.profile
}

provider "kubernetes" {
@@ -17,15 +16,15 @@ provider "kubernetes" {
# Backend Config (partial)
#
terraform {
required_version = ">= 0.12.28"
required_version = "~> 1.1.3"

required_providers {
aws = "~> 3.28"
kubernetes = "~> 2.0.2"
aws = "~> 4.10"
kubernetes = "~> 2.10"
}

backend "s3" {
key = "apps-devstg/k8s-eks-dr/cluster/terraform.tfstate"
key = "apps-devstg/k8s-eks-v1.17-dr/cluster/terraform.tfstate"
}
}

@@ -47,7 +46,7 @@ data "terraform_remote_state" "eks-dr-vpc" {
region = var.region
profile = var.profile
bucket = var.bucket
key = "apps-devstg/k8s-eks-dr/network/terraform.tfstate"
key = "apps-devstg/k8s-eks-v1.17-dr/network/terraform.tfstate"
}
}
