Issue with cdk blueprint version 1.4 when running ClusterAutoScalerAddOn on Kubernetes 1.23 #531
Comments
@shapirov103 you can close this one
@bnaydenov how is it working on 1.22 without the policy "ec2:DescribeInstanceTypes"?
@softmates most likely it is due to the different versions of the helm chart that the autoscaler add-on uses for different versions of EKS Kubernetes. Check this file:
```typescript
const versionMap = new Map([
  [KubernetesVersion.V1_23, "9.21.0"],
  [KubernetesVersion.V1_22, "9.13.1"],
  [KubernetesVersion.V1_21, "9.13.1"],
  [KubernetesVersion.V1_20, "9.9.2"],
  [KubernetesVersion.V1_19, "9.4.0"],
  [KubernetesVersion.V1_18, "9.4.0"],
]);
```
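The lookup itself is straightforward. A self-contained sketch of how the add-on could resolve the chart version from that map (plain version strings stand in for the `KubernetesVersion` enum from aws-cdk-lib, and the `chartVersionFor` helper and its fallback behavior are illustrative assumptions, not the actual implementation):

```typescript
// Sketch: map an EKS Kubernetes version to the cluster-autoscaler helm chart
// version the add-on installs. Plain strings stand in for the
// KubernetesVersion enum values used in the real file.
const versionMap = new Map<string, string>([
  ["1.23", "9.21.0"],
  ["1.22", "9.13.1"],
  ["1.21", "9.13.1"],
  ["1.20", "9.9.2"],
  ["1.19", "9.4.0"],
  ["1.18", "9.4.0"],
]);

// Hypothetical helper: resolve the chart version for a cluster,
// falling back to the newest known chart for unmapped versions.
function chartVersionFor(k8sVersion: string): string {
  return versionMap.get(k8sVersion) ?? "9.21.0";
}

console.log(chartVersionFor("1.23")); // "9.21.0"
console.log(chartVersionFor("1.22")); // "9.13.1"
```

The point of the map is exactly what this thread hit: a 1.23 cluster pulls chart 9.21.0, whose autoscaler calls EC2 APIs that the older charts (9.13.1 and below) did not.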
Yep, it makes sense. Looking at the specs, the older chart does not need the additional "ec2:DescribeInstanceTypes" policy; compare the two chart versions:
https://artifacthub.io/packages/helm/cluster-autoscaler/cluster-autoscaler/9.13.0
https://artifacthub.io/packages/helm/cluster-autoscaler/cluster-autoscaler/9.21.0
@softmates I was inspired by this issue: kubernetes/autoscaler#3216, more specifically this comment: https://github.com/kubernetes/autoscaler/issues/3216#issuecomment-1047164006
These changes are released in
Describe the bug
When cdk blueprint version 1.4 is used and ClusterAutoScalerAddOn is installed on eks kubernetes 1.23, the pod blueprints-addon-cluster-autoscaler-aws-cluster-autoscaler fails to start. The same setup works without problems on eks kubernetes 1.21 and 1.22.
Expected Behavior
When installing ClusterAutoScalerAddOn on eks kubernetes 1.23, the pod blueprints-addon-cluster-autoscaler-aws-cluster-autoscaler is supposed to start without errors.
Current Behavior
Pod blueprints-addon-cluster-autoscaler-aws-cluster-autoscaler starts, and during the startup phase, which takes about 15-20 sec, the pod crashes with the following errors. The main error is:
Failed to generate AWS EC2 Instance Types: UnauthorizedOperation: You are not authorized to perform this operation.
Reproduction Steps
Just use cdk blueprint 1.4 to spin up a brand new eks k8s cluster 1.23 with ClusterAutoScalerAddOn. The cdk deploy step will be successful, but after that the pod blueprints-addon-cluster-autoscaler-aws-cluster-autoscaler will crash and cannot be started.
Possible Solution
I have found what is wrong and will prepare a PR to fix this.
TLDR: We need to add the missing policy ec2:DescribeInstanceTypes to the cluster-autoscaler IAM statements here:
cdk-eks-blueprints/lib/addons/cluster-autoscaler/index.ts
Line 79 in c03512b
For more info check here:
kubernetes/autoscaler#3216
particuleio/terraform-kubernetes-addons#1320
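For reference, the fix amounts to adding one action to the autoscaler's IAM statement. A minimal, self-contained sketch of the statement shape (written as a plain JSON-style object rather than the actual `iam.PolicyStatement` construct used in the blueprint; the surrounding action list is an illustrative assumption based on the standard cluster-autoscaler permissions, not the add-on's exact list):

```typescript
// Sketch of the cluster-autoscaler IAM statement with the missing action.
// Only "ec2:DescribeInstanceTypes" is the addition this issue is about;
// the other actions are the commonly documented autoscaler permissions.
const autoscalerStatement = {
  Effect: "Allow",
  Actions: [
    "autoscaling:DescribeAutoScalingGroups",
    "autoscaling:DescribeAutoScalingInstances",
    "autoscaling:DescribeLaunchConfigurations",
    "autoscaling:SetDesiredCapacity",
    "autoscaling:TerminateInstanceInAutoScalingGroup",
    "ec2:DescribeLaunchTemplateVersions",
    "ec2:DescribeInstanceTypes", // missing permission causing UnauthorizedOperation on 1.23
  ],
  Resources: ["*"],
};

console.log(autoscalerStatement.Actions.includes("ec2:DescribeInstanceTypes"));
```

Without that action, chart 9.21.0's autoscaler fails at startup when it tries to enumerate EC2 instance types, which matches the UnauthorizedOperation error above.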
I have created a local monkey patch of addon/cluster-autoscaler in cdk-eks-blueprints/lib/addons/cluster-autoscaler/index.ts (Line 79 in c03512b), adding "ec2:DescribeInstanceTypes", and everything is working as expected.
Additional Information/Context
No response
CDK CLI Version
2.50.0 (build 4c11af6)
EKS Blueprints Version
1.4.0
Node.js Version
v16.17.0
Environment details (OS name and version, etc.)
macOS Monterey - Version 12.6
Other information
No response