Description
The worker security group created by Terraform (terraform-aws-eks/workers.tf, line 344 at 781f673) doesn't honour its description. The description states that the group is for all node groups, but it is only attached when create_launch_template is set to true (its default is false 👀) per node group (terraform-aws-eks/modules/node_groups/node_groups.tf, line 47 at 781f673). This results in an oddity where the default behaviour for a node/worker is to use the AWS EKS-created cluster security group (which is separate from the cluster security group created by the module).
This issue becomes even more apparent when you have configured a mesh overlay on your cluster where, for example:
Node Group 1 has create_launch_template set to false (or unset, defaulting to false in the module)
Node Group 2 has create_launch_template set to true.
This results in the two separate node groups being unable to communicate with each other, completely breaking the mesh network 😢.
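In case it helps, the shape of the problem can be sketched roughly as follows (illustrative HCL only, not the module's actual code; the variable and resource names here are placeholders): the Terraform-managed worker security group is only referenced from the launch template, and the launch template is only created when create_launch_template = true, so node groups without one fall back to the EKS-managed security group.
variable "create_launch_template" {
  type    = bool
  default = false # module default, so by default no launch template is created
}

variable "worker_security_group_id" {
  type        = string
  description = "The worker SG created by the module (placeholder for this sketch)"
}

resource "aws_launch_template" "workers" {
  count       = var.create_launch_template ? 1 : 0
  name_prefix = "eks-worker-"

  network_interfaces {
    # The worker SG is attached to node ENIs here, and only here. Node groups
    # created without a launch template never pick it up; AWS EKS attaches its
    # own cluster security group to those nodes instead.
    security_groups = [var.worker_security_group_id]
  }
}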
Reproduction
Code Snippet to Reproduce
Using https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/examples/managed_node_groups/main.tf set node_groups to the following:
node_groups = {
  no_worker_sg = {
    desired_capacity = 1
    max_capacity     = 1
    min_capacity     = 1
    instance_types   = ["m5.large"]
    capacity_type    = "SPOT"
  },
  worker_sg = {
    desired_capacity       = 1
    max_capacity           = 1
    min_capacity           = 1
    instance_types         = ["m5.large"]
    capacity_type          = "SPOT"
    create_launch_template = true # This being the key difference between the two node_groups
  }
}
Expected behavior
I would expect the worker security group created by Terraform to ALWAYS be attached to all of my nodes, regardless of what I pass to the module.
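Until that happens, a possible workaround (inferred from the behaviour above, not something the module documents) is to set create_launch_template = true on every node group so they all receive the same worker security group:
node_groups = {
  no_worker_sg = {
    desired_capacity       = 1
    max_capacity           = 1
    min_capacity           = 1
    instance_types         = ["m5.large"]
    capacity_type          = "SPOT"
    create_launch_template = true # forces the worker SG onto this group as well
  },
  worker_sg = {
    desired_capacity       = 1
    max_capacity           = 1
    min_capacity           = 1
    instance_types         = ["m5.large"]
    capacity_type          = "SPOT"
    create_launch_template = true
  }
}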
Actual behavior
no_worker_sg uses the AWS EKS-created security group as the primary security group on its nodes
worker_sg uses the Terraform-created security group named worker as the primary security group on its nodes
This results in zero traffic being able to pass between these two separate node groups 😭
Terminal Output Screenshot(s)
Additional context
I am running Istio on my cluster and using a separate node group because I needed a large node dedicated to one application. This would have worked fine if the worker security group had been attached to all node groups as its description states, but it isn't.
I could open a PR against this, but it WILL cause backward-compatibility issues for the module. Because the EKS security group is attached by default, people will have built assumptions on top of it, for example allowing that security group to access their database in RDS, when they should be using the worker security group created by Terraform. Due to the default behaviour, the AWS EKS control plane can then reach your RDS instance if those assumptions have been made 😱
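For anyone who has built rules on the default behaviour, the safer pattern is to reference the worker security group created by the module in downstream rules instead of the EKS-created cluster security group. A rough sketch (the RDS security group resource and the module.eks reference are placeholders, and I'm assuming the module's worker_security_group_id output here):
resource "aws_security_group_rule" "rds_from_workers" {
  type                     = "ingress"
  from_port                = 5432 # adjust for your database engine
  to_port                  = 5432
  protocol                 = "tcp"
  security_group_id        = aws_security_group.rds.id           # your RDS security group (placeholder)
  source_security_group_id = module.eks.worker_security_group_id # the Terraform-created worker SG
  description              = "Database access from EKS worker nodes only"
}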