docs: Fix IRSA example when deploying cluster-autoscaler from the latest kubernetes/autoscaler helm repo #1090

Merged
docs/spot-instances.md (2 changes: 1 addition & 1 deletion)

@@ -8,7 +8,7 @@ You need to install a daemonset to catch the 2 minute warning before termination
helm install stable/k8s-spot-termination-handler --namespace kube-system
```

-In the following examples at least 1 worker group that uses on-demand instances is included. This worker group has an added node label that can be used in scheduling. This could be used to schedule any workload not suitable for spot instances but is important for the [cluster-autoscaler](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler) as it might end up unscheduled when spot instances are terminated. You can add this to the values of the [cluster-autoscaler helm chart](https://github.com/helm/charts/tree/master/stable/cluster-autoscaler):
+In the following examples at least 1 worker group that uses on-demand instances is included. This worker group has an added node label that can be used in scheduling. This could be used to schedule any workload not suitable for spot instances but is important for the [cluster-autoscaler](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler) as it might end up unscheduled when spot instances are terminated. You can add this to the values of the [cluster-autoscaler helm chart](https://github.com/kubernetes/autoscaler/tree/master/charts/cluster-autoscaler-chart):

```yaml
nodeSelector:
```
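As a sketch of what that values change looks like in full, the chart can be pinned to the on-demand worker group via `nodeSelector`. The label key and value below are illustrative only and must match whatever label your on-demand worker group actually applies to its nodes:

```yaml
# Illustrative cluster-autoscaler chart values snippet.
# The label key/value are assumptions; use the node label
# your on-demand worker group sets (e.g. via kubelet_extra_args).
nodeSelector:
  kubernetes.io/lifecycle: normal
```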
examples/irsa/README.md (12 changes: 6 additions & 6 deletions)

@@ -1,6 +1,6 @@
# IAM Roles for Service Accounts

-This example shows how to create an IAM role to be used for a Kubernetes `ServiceAccount`. It will create a policy and role to be used by the [cluster-autoscaler](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler) using the [public Helm chart](https://github.com/helm/charts/tree/master/stable/cluster-autoscaler).
+This example shows how to create an IAM role to be used for a Kubernetes `ServiceAccount`. It will create a policy and role to be used by the [cluster-autoscaler](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler) using the [public Helm chart](https://github.com/kubernetes/autoscaler/tree/master/charts/cluster-autoscaler-chart).

The AWS documentation for IRSA is here: https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html

@@ -38,15 +38,15 @@ $ helm install cluster-autoscaler --namespace kube-system autoscaler/cluster-aut
Ensure the cluster-autoscaler pod is running:

```
-$ kubectl --namespace=kube-system get pods -l "app.kubernetes.io/name=aws-cluster-autoscaler"
-NAME                                                        READY   STATUS    RESTARTS   AGE
-cluster-autoscaler-aws-cluster-autoscaler-5545d4b97-9ztpm   1/1     Running   0          3m
+$ kubectl --namespace=kube-system get pods -l "app.kubernetes.io/name=aws-cluster-autoscaler-chart"
+NAME                                                              READY   STATUS    RESTARTS   AGE
+cluster-autoscaler-aws-cluster-autoscaler-chart-5545d4b97-9ztpm   1/1     Running   0          3m
```

Observe the `AWS_*` environment variables that were added to the pod automatically by EKS:

```
-kubectl --namespace=kube-system get pods -l "app.kubernetes.io/name=aws-cluster-autoscaler" -o yaml | grep -A3 AWS_ROLE_ARN
+kubectl --namespace=kube-system get pods -l "app.kubernetes.io/name=aws-cluster-autoscaler-chart" -o yaml | grep -A3 AWS_ROLE_ARN

- name: AWS_ROLE_ARN
value: arn:aws:iam::xxxxxxxxx:role/cluster-autoscaler
@@ -57,7 +57,7 @@ kubectl --namespace=kube-system get pods -l "app.kubernetes.io/name=aws-cluster-
Verify it is working by checking the logs; you should see that it has discovered the autoscaling group successfully:

```
-kubectl --namespace=kube-system logs -l "app.kubernetes.io/name=aws-cluster-autoscaler"
+kubectl --namespace=kube-system logs -l "app.kubernetes.io/name=aws-cluster-autoscaler-chart"

I0128 14:59:00.901513 1 auto_scaling_groups.go:354] Regenerating instance to ASG map for ASGs: [test-eks-irsa-worker-group-12020012814125354700000000e]
I0128 14:59:00.969875 1 auto_scaling_groups.go:138] Registering ASG test-eks-irsa-worker-group-12020012814125354700000000e
```
examples/irsa/cluster-autoscaler-chart-values.yaml (3 changes: 3 additions & 0 deletions)

@@ -3,7 +3,10 @@ awsRegion: us-west-2
rbac:
  create: true
  serviceAccount:
    # This value should match local.k8s_service_account_name in locals.tf
    name: cluster-autoscaler-aws-cluster-autoscaler-chart
    annotations:
      # This value should match the ARN of the role created by module.iam_assumable_role_admin in irsa.tf
      eks.amazonaws.com/role-arn: "arn:aws:iam::<ACCOUNT ID>:role/cluster-autoscaler"

autoDiscovery:
examples/irsa/locals.tf (2 changes: 1 addition & 1 deletion)

@@ -1,5 +1,5 @@
locals {
  cluster_name                  = "test-eks-irsa"
  k8s_service_account_namespace = "kube-system"
-  k8s_service_account_name      = "cluster-autoscaler-aws-cluster-autoscaler"
+  k8s_service_account_name      = "cluster-autoscaler-aws-cluster-autoscaler-chart"
}
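For context on why the service account name in locals.tf must match the chart values: the example's `module.iam_assumable_role_admin` (referenced in the values file comments) scopes the IAM role's trust policy to exactly one `namespace:name` subject. A minimal sketch of that wiring, assuming the `iam-assumable-role-with-oidc` submodule of terraform-aws-modules/iam and an `eks` module output; the version pin and policy attachment details are assumptions, not copied from this example's actual irsa.tf:

```hcl
# Sketch only: creates an IAM role assumable via the cluster's OIDC
# provider, restricted to the cluster-autoscaler ServiceAccount.
module "iam_assumable_role_admin" {
  source  = "terraform-aws-modules/iam/aws//modules/iam-assumable-role-with-oidc"
  version = "~> 3.0" # assumed pin

  create_role  = true
  role_name    = "cluster-autoscaler"
  provider_url = replace(module.eks.cluster_oidc_issuer_url, "https://", "")

  # Only this ServiceAccount may assume the role; if the name here drifts
  # from the chart's rbac.serviceAccount.name, STS will reject the pod.
  oidc_fully_qualified_subjects = [
    "system:serviceaccount:${local.k8s_service_account_namespace}:${local.k8s_service_account_name}",
  ]
}
```

The role ARN this module outputs is what goes into the `eks.amazonaws.com/role-arn` annotation in the chart values above.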