
Commit

Improve AWS gotchas section and make cloud providers READMEs more discoverable #1744

Signed-off-by: Sam Weston <[email protected]>
cablespaghetti committed Mar 4, 2019
1 parent f897f89 commit 82162e5
Showing 2 changed files with 9 additions and 3 deletions.
8 changes: 7 additions & 1 deletion cluster-autoscaler/README.md
@@ -9,7 +9,13 @@ Cluster Autoscaler is a tool that automatically adjusts the size of the Kubernetes…

# FAQ/Documentation

-Is available [HERE](./FAQ.md).
+An FAQ is available [HERE](./FAQ.md).
+
+You should also take a look at the notes and "gotchas" for your specific cloud provider:
+* [AliCloud](./cloudprovider/alicloud/README.md)
+* [Azure](./cloudprovider/azure/README.md)
+* [AWS](./cloudprovider/aws/README.md)
+* [BaiduCloud](./cloudprovider/baiducloud/README.md)

# Releases

4 changes: 2 additions & 2 deletions cluster-autoscaler/cloudprovider/aws/README.md
@@ -142,8 +142,8 @@ If you'd like to scale node groups from 0, an `autoscaling:DescribeLaunchConfigurations`…
```

## Common Notes and Gotchas:
-- The `/etc/ssl/certs/ca-certificates.crt` should exist by default on your ec2 instance. If you use Amazon Linux 2 (EKS woker node AMI by default), use `/etc/ssl/certs/ca-bundle.crt` instead.
-- Cluster autoscaler is not zone aware (for now), so if you wish to span multiple availability zones in your autoscaling groups beware that cluster autoscaler will not evenly distribute them. For more information, see https://github.com/kubernetes/contrib/pull/1552#discussion_r75532949.
+- `/etc/ssl/certs/ca-certificates.crt` should exist by default on your EC2 instance. If you use Amazon Linux 2 (the default EKS worker node AMI), use `/etc/kubernetes/pki/ca.crt` instead for the volume hostPath of your cluster-autoscaler manifest (see the sketch after this list).
+- Cluster autoscaler is not zone aware (for now), so if your autoscaling groups span multiple availability zones, be aware that cluster autoscaler will not distribute nodes across them evenly. This is not a problem for every use case, as AWS will try to distribute nodes evenly across zones on scale up, but you should ["Suspend" the "AZRebalance" process](https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-suspend-resume-processes.html) if you want to stop AWS from unexpectedly terminating your nodes to keep the ASG balanced. More information on how this works under the hood in cluster autoscaler is [available here](https://github.com/kubernetes/contrib/pull/1552#discussion_r75532949).
- By default, cluster autoscaler will not terminate nodes running pods in the kube-system namespace. You can override this default behaviour by passing in the `--skip-nodes-with-system-pods=false` flag.
- By default, cluster autoscaler will wait 10 minutes between scale down operations; you can adjust this using the `--scale-down-delay-after-add`, `--scale-down-delay-after-delete`, and `--scale-down-delay-after-failure` flags. E.g. `--scale-down-delay-after-add=5m` decreases the scale down delay to 5 minutes after a node has been added.
- If you're running multiple ASGs, the `--expander` flag supports three options: `random`, `most-pods` and `least-waste`. `random` will expand a random ASG on scale up. `most-pods` will scale up the ASG that would be able to schedule the most pods. `least-waste` will expand the ASG that will waste the least amount of CPU/MEM resources. In the event of a tie, cluster autoscaler will fall back to `random`.
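
To make the hostPath and flag gotchas above concrete, here is a minimal sketch of the relevant parts of a cluster-autoscaler Deployment manifest. The namespace, image tag, ASG name (`k8s-worker-asg-1`), min/max sizes and flag values are illustrative placeholders rather than recommendations:

```yaml
# Hypothetical excerpt of a cluster-autoscaler Deployment, illustrating the
# flag overrides and the CA certificate hostPath volume discussed above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cluster-autoscaler
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cluster-autoscaler
  template:
    metadata:
      labels:
        app: cluster-autoscaler
    spec:
      serviceAccountName: cluster-autoscaler      # assumes a service account with the required RBAC exists
      containers:
        - name: cluster-autoscaler
          image: k8s.gcr.io/cluster-autoscaler:v1.13.2   # placeholder version tag
          command:
            - ./cluster-autoscaler
            - --cloud-provider=aws
            - --nodes=1:10:k8s-worker-asg-1        # min:max:ASG-name (placeholder ASG)
            - --expander=least-waste               # expand the ASG that wastes the least CPU/MEM
            - --skip-nodes-with-system-pods=false  # allow scaling down nodes running kube-system pods
            - --scale-down-delay-after-add=5m      # shorten the default 10 minute scale-down delay
          volumeMounts:
            - name: ssl-certs
              mountPath: /etc/ssl/certs/ca-certificates.crt
              readOnly: true
      volumes:
        - name: ssl-certs
          hostPath:
            # Per the gotcha above: on Amazon Linux 2 (the default EKS worker AMI)
            # point the hostPath at /etc/kubernetes/pki/ca.crt; on other distributions
            # /etc/ssl/certs/ca-certificates.crt usually exists on the host.
            path: /etc/kubernetes/pki/ca.crt
```

Suspending the `AZRebalance` process mentioned in the zone gotcha is done on the ASG itself (for example via the AWS console or `aws autoscaling suspend-processes`), not in this manifest.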
