Merge pull request #2229 from Jeffwan/fix-1.14
CA 1.14: cherry pick of #1970, #1864 and #1861
k8s-ci-robot authored Aug 5, 2019
2 parents 22ab91f + 279b7cf commit 755b8a6
Showing 6 changed files with 135 additions and 9 deletions.
2 changes: 1 addition & 1 deletion cluster-autoscaler/cloudprovider/aws/README.md
@@ -178,7 +178,7 @@ spec:
See CloudFormation example [here](MixedInstancePolicy.md).
## Common Notes and Gotchas:
- The `/etc/ssl/certs/ca-certificates.crt` should exist by default on your ec2 instance. If you use Amazon Linux 2 (EKS worker node AMI by default), use `/etc/kubernetes/pki/ca.crt` instead for the volume hostPath in your cluster autoscaler manifest.
- The `/etc/ssl/certs/ca-bundle.crt` should exist by default on EC2 instances in your EKS cluster. If you use another cluster provisioning tool such as [kops](https://github.com/kubernetes/kops) with an operating system other than Amazon Linux 2, use `/etc/ssl/certs/ca-certificates.crt` or the correct path on your host instead for the volume hostPath in your cluster autoscaler manifest.
- Cluster autoscaler does not support Auto Scaling Groups that span multiple Availability Zones; instead, use one Auto Scaling Group per Availability Zone and enable the [--balance-similar-node-groups](../../FAQ.md#im-running-cluster-with-nodes-in-multiple-zones-for-ha-purposes-is-that-supported-by-cluster-autoscaler) feature. If you do use a single Auto Scaling Group that spans multiple Availability Zones, you will find that AWS unexpectedly terminates nodes without draining them because of the [rebalancing feature](https://docs.aws.amazon.com/autoscaling/ec2/userguide/auto-scaling-benefits.html#arch-AutoScalingMultiAZ).
- EBS volumes cannot span multiple AWS Availability Zones. A Pod with a Persistent Volume in one AZ must run on a k8s/EKS node in that same Availability Zone. If the AWS Auto Scaling Group launches a new k8s/EKS node in a different AZ and the Pod is moved onto it, the Persistent Volume in the previous AZ will not be available from the new AZ and the Pod will stay stuck in Pending status. The workaround is to use a single AZ for the k8s/EKS nodes.
- By default, cluster autoscaler will not terminate nodes running pods in the kube-system namespace. You can override this default behaviour by passing in the `--skip-nodes-with-system-pods=false` flag, as shown in the sketch after this list.
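
The notes above translate into a handful of concrete manifest settings. The fragment below is a minimal sketch, not one of the example manifests shipped in this commit: the ASG names, zones, and min/max node counts are placeholder assumptions, and only the lines relevant to the gotchas above are shown.

```yaml
# Minimal sketch of the relevant parts of a cluster-autoscaler pod spec.
# ASG names, zones, and node counts are placeholders, not shipped defaults.
spec:
  serviceAccountName: cluster-autoscaler
  containers:
    - image: k8s.gcr.io/cluster-autoscaler:v1.14.4
      name: cluster-autoscaler
      command:
        - ./cluster-autoscaler
        - --cloud-provider=aws
        # one Auto Scaling Group per Availability Zone, kept at similar sizes
        - --balance-similar-node-groups=true
        - --nodes=1:10:my-asg-us-east-1a   # placeholder ASG name
        - --nodes=1:10:my-asg-us-east-1b   # placeholder ASG name
        # permit scale-down of nodes running kube-system pods
        - --skip-nodes-with-system-pods=false
      volumeMounts:
        - name: ssl-certs
          mountPath: /etc/ssl/certs/ca-certificates.crt
          readOnly: true
  volumes:
    - name: ssl-certs
      hostPath:
        # Amazon Linux 2 (default EKS AMI); use ca-certificates.crt elsewhere
        path: "/etc/ssl/certs/ca-bundle.crt"
```

Registering one ASG per zone through separate `--nodes` flags and enabling balancing keeps the per-zone groups at similar sizes, instead of relying on a single multi-AZ group whose rebalancing terminates nodes behind the autoscaler's back.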
114 changes: 114 additions & 0 deletions cluster-autoscaler/cloudprovider/aws/ec2_instance_types.go
@@ -789,6 +789,42 @@ var InstanceTypes = map[string]*instanceType{
        MemoryMb:     16384,
        GPU:          0,
    },
    "m5ad.12xlarge": {
        InstanceType: "m5ad.12xlarge",
        VCPU:         48,
        MemoryMb:     196608,
        GPU:          0,
    },
    "m5ad.24xlarge": {
        InstanceType: "m5ad.24xlarge",
        VCPU:         96,
        MemoryMb:     393216,
        GPU:          0,
    },
    "m5ad.2xlarge": {
        InstanceType: "m5ad.2xlarge",
        VCPU:         8,
        MemoryMb:     32768,
        GPU:          0,
    },
    "m5ad.4xlarge": {
        InstanceType: "m5ad.4xlarge",
        VCPU:         16,
        MemoryMb:     65536,
        GPU:          0,
    },
    "m5ad.large": {
        InstanceType: "m5ad.large",
        VCPU:         2,
        MemoryMb:     8192,
        GPU:          0,
    },
    "m5ad.xlarge": {
        InstanceType: "m5ad.xlarge",
        VCPU:         4,
        MemoryMb:     16384,
        GPU:          0,
    },
    "m5d": {
        InstanceType: "m5d",
        VCPU:         96,
@@ -1095,6 +1131,42 @@ var InstanceTypes = map[string]*instanceType{
        MemoryMb:     32768,
        GPU:          0,
    },
    "r5ad.12xlarge": {
        InstanceType: "r5ad.12xlarge",
        VCPU:         48,
        MemoryMb:     393216,
        GPU:          0,
    },
    "r5ad.24xlarge": {
        InstanceType: "r5ad.24xlarge",
        VCPU:         96,
        MemoryMb:     786432,
        GPU:          0,
    },
    "r5ad.2xlarge": {
        InstanceType: "r5ad.2xlarge",
        VCPU:         8,
        MemoryMb:     65536,
        GPU:          0,
    },
    "r5ad.4xlarge": {
        InstanceType: "r5ad.4xlarge",
        VCPU:         16,
        MemoryMb:     131072,
        GPU:          0,
    },
    "r5ad.large": {
        InstanceType: "r5ad.large",
        VCPU:         2,
        MemoryMb:     16384,
        GPU:          0,
    },
    "r5ad.xlarge": {
        InstanceType: "r5ad.xlarge",
        VCPU:         4,
        MemoryMb:     32768,
        GPU:          0,
    },
    "r5d": {
        InstanceType: "r5d",
        VCPU:         96,
@@ -1245,6 +1317,48 @@ var InstanceTypes = map[string]*instanceType{
        MemoryMb:     16384,
        GPU:          0,
    },
    "t3a.2xlarge": {
        InstanceType: "t3a.2xlarge",
        VCPU:         8,
        MemoryMb:     32768,
        GPU:          0,
    },
    "t3a.large": {
        InstanceType: "t3a.large",
        VCPU:         2,
        MemoryMb:     8192,
        GPU:          0,
    },
    "t3a.medium": {
        InstanceType: "t3a.medium",
        VCPU:         2,
        MemoryMb:     4096,
        GPU:          0,
    },
    "t3a.micro": {
        InstanceType: "t3a.micro",
        VCPU:         2,
        MemoryMb:     1024,
        GPU:          0,
    },
    "t3a.nano": {
        InstanceType: "t3a.nano",
        VCPU:         2,
        MemoryMb:     512,
        GPU:          0,
    },
    "t3a.small": {
        InstanceType: "t3a.small",
        VCPU:         2,
        MemoryMb:     2048,
        GPU:          0,
    },
    "t3a.xlarge": {
        InstanceType: "t3a.xlarge",
        VCPU:         4,
        MemoryMb:     16384,
        GPU:          0,
    },
    "u-12tb1": {
        InstanceType: "u-12tb1",
        VCPU:         448,
@@ -47,6 +47,9 @@ rules:
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["watch","list","get"]
- apiGroups: ["batch", "extensions"]
resources: ["jobs"]
verbs: ["get", "list", "watch", "patch"]

---
apiVersion: rbac.authorization.k8s.io/v1
@@ -121,7 +124,7 @@ spec:
    spec:
      serviceAccountName: cluster-autoscaler
      containers:
        - image: k8s.gcr.io/cluster-autoscaler:v1.14.3
        - image: k8s.gcr.io/cluster-autoscaler:v1.14.4
          name: cluster-autoscaler
          resources:
            limits:
@@ -146,4 +149,4 @@ spec:
      volumes:
        - name: ssl-certs
          hostPath:
            path: "/etc/ssl/certs/ca-certificates.crt"
            path: "/etc/ssl/certs/ca-bundle.crt"
@@ -47,6 +47,9 @@ rules:
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["watch","list","get"]
- apiGroups: ["batch", "extensions"]
resources: ["jobs"]
verbs: ["get", "list", "watch", "patch"]

---
apiVersion: rbac.authorization.k8s.io/v1
@@ -121,7 +124,7 @@ spec:
    spec:
      serviceAccountName: cluster-autoscaler
      containers:
        - image: k8s.gcr.io/cluster-autoscaler:v1.14.3
        - image: k8s.gcr.io/cluster-autoscaler:v1.14.4
          name: cluster-autoscaler
          resources:
            limits:
@@ -147,4 +150,4 @@ spec:
      volumes:
        - name: ssl-certs
          hostPath:
            path: "/etc/ssl/certs/ca-certificates.crt"
            path: "/etc/ssl/certs/ca-bundle.crt"
@@ -47,6 +47,9 @@ rules:
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["watch","list","get"]
- apiGroups: ["batch", "extensions"]
resources: ["jobs"]
verbs: ["get", "list", "watch", "patch"]

---
apiVersion: rbac.authorization.k8s.io/v1
@@ -121,7 +124,7 @@ spec:
    spec:
      serviceAccountName: cluster-autoscaler
      containers:
        - image: k8s.gcr.io/cluster-autoscaler:v1.14.3
        - image: k8s.gcr.io/cluster-autoscaler:v1.14.4
          name: cluster-autoscaler
          resources:
            limits:
@@ -145,4 +148,4 @@ spec:
      volumes:
        - name: ssl-certs
          hostPath:
            path: "/etc/ssl/certs/ca-certificates.crt"
            path: "/etc/ssl/certs/ca-bundle.crt"
@@ -47,6 +47,9 @@ rules:
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["watch","list","get"]
- apiGroups: ["batch", "extensions"]
resources: ["jobs"]
verbs: ["get", "list", "watch", "patch"]

---
apiVersion: rbac.authorization.k8s.io/v1
@@ -128,7 +131,7 @@ spec:
      nodeSelector:
        kubernetes.io/role: master
      containers:
        - image: k8s.gcr.io/cluster-autoscaler:v1.14.3
        - image: k8s.gcr.io/cluster-autoscaler:v1.14.4
          name: cluster-autoscaler
          resources:
            limits:
@@ -152,4 +155,4 @@ spec:
      volumes:
        - name: ssl-certs
          hostPath:
            path: "/etc/ssl/certs/ca-certificates.crt"
            path: "/etc/ssl/certs/ca-bundle.crt"
