diff --git a/.github/images/security_groups.svg b/.github/images/security_groups.svg new file mode 100644 index 00000000000..6b120e98ba9 --- /dev/null +++ b/.github/images/security_groups.svg @@ -0,0 +1 @@ + diff --git a/README.md b/README.md index 6a3afc1ace4..06f56888e13 100644 --- a/README.md +++ b/README.md @@ -168,6 +168,55 @@ module "eks" { } ``` +## Module Design Considerations + +### General Notes + +While the module is designed to be flexible and support as many use cases and configurations as possible, there is a limit to what first class support can be provided without over-burdening the module with complexity. Below is a list of general notes on the design intent captured by this module which hopefully explains some of the decisions that are, or will be, made in terms of what is added/supported natively by the module: + +- Despite the addition of Windows Subsystem for Linux (WSL for short), containerization technology is very much a suite of Linux constructs and therefore Linux is the primary OS supported by this module. In addition, due to the first class support provided by AWS, Bottlerocket OS and Fargate Profiles are also natively supported by this module. This module does not make any attempt to NOT support Windows, as in preventing the usage of Windows based nodes; however, it is up to users to put in the additional effort required to operate Windows based nodes when using the module. Users can refer to the [AWS documentation](https://docs.aws.amazon.com/eks/latest/userguide/windows-support.html) for further details. What does this mean: + - AWS EKS Managed Node Groups default to `linux` as the `platform`, but `bottlerocket` is also supported by AWS (`windows` is not supported by AWS EKS Managed Node Groups) + - AWS Self Managed Node Groups also default to `linux` and the default AMI used is the latest AMI for the selected Kubernetes version. If you wish to use a different OS or AMI then you will need to opt in to the necessary configurations to ensure the correct AMI is used in conjunction with the necessary user data to ensure the nodes are launched and joined to your cluster successfully. +- AWS EKS Managed Node Groups are currently the preferred route over Self Managed Node Groups for compute nodes. Both operate very similarly - both are backed by autoscaling groups and launch templates deployed and visible within your account. However, AWS EKS Managed Node Groups provide a better, more "managed service" user experience and therefore take precedence over Self Managed Node Groups. That said, there are currently inherent limitations as AWS continues to roll out additional feature support similar to the level of customization you can achieve with Self Managed Node Groups. When requesting added feature support for AWS EKS Managed Node Groups, please ensure you have verified that the feature(s) are 1) supported by AWS and 2) supported by the Terraform AWS provider before submitting a feature request. +- Due to the plethora of tooling and different manners of configuring your cluster, cluster configuration is intentionally left out of the module in order to simplify the module for a broader user base. Previous module versions provided support for managing the aws-auth configmap via the Kubernetes Terraform provider using the now deprecated aws-iam-authenticator; these are no longer included in the module. This module strictly focuses on the infrastructure resources to provision an EKS cluster as well as any supporting AWS resources - how the internals of the cluster are configured and managed is up to users and is outside the scope of this module. An output attribute, `aws_auth_configmap_yaml`, is provided to help bridge this transition (see the sketch below). Please see the various examples provided where this attribute is used to ensure that self managed node groups or external node groups have their IAM roles appropriately mapped to the aws-auth configmap. How users elect to manage the aws-auth configmap is left up to their choosing.
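For illustration, below is a minimal sketch of one way to consume that output, assuming `kubectl` and a valid kubeconfig are available wherever Terraform runs (the `null_resource` name and trigger key are hypothetical):

```hcl
# Sketch only: push the module-rendered aws-auth ConfigMap into the cluster.
# Assumes kubectl is installed and already authenticated against the cluster.
resource "null_resource" "patch_aws_auth" {
  triggers = {
    # re-run whenever the rendered ConfigMap changes
    configmap = module.eks.aws_auth_configmap_yaml
  }

  provisioner "local-exec" {
    command = "kubectl patch configmap/aws-auth --namespace kube-system --patch \"${module.eks.aws_auth_configmap_yaml}\""
  }
}
```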
+ +### User Data & Bootstrapping + +There are a multitude of different possible configurations for how module users require their user data to be configured. In order to better support the various combinations - from simple, out of the box support provided by the module, to full customization of the user data using a template provided by users - the user data has been abstracted out to its own module. Users can see the various methods of using and providing user data through the [user data examples](https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/examples/user_data), as well as more detailed information on the design and possible configurations via the [user data module itself](https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/modules/_user_data) + +In general (tl;dr): +- AWS EKS Managed Node Groups + - `linux` platform (default) -> user data is pre-pended to the AWS provided bootstrap user data (bash/shell script) when using the AWS EKS provided AMI, otherwise users need to opt in via `enable_bootstrap_user_data` and use the module provided user data template or provide their own user data template to bootstrap nodes to join the cluster + - `bottlerocket` platform -> user data is merged with the AWS provided bootstrap user data (TOML file) when using the AWS EKS provided AMI, otherwise users need to opt in via `enable_bootstrap_user_data` and use the module provided user data template or provide their own user data template to bootstrap nodes to join the cluster +- Self Managed Node Groups + - `linux` platform (default) -> the user data template (bash/shell script) provided by the module is used as the default; users are able to provide their own user data template + - `bottlerocket` platform -> the user data template (TOML file) provided by the module is used as the default; users are able to provide their own user data template + - `windows` platform -> the user data template (PowerShell/PS1 script) provided by the module is used as the default; users are able to provide their own user data template + +Module provided default templates can be found under the [templates directory](https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/templates); a sketch of the custom AMI opt-in path follows.
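As a concrete sketch of the custom AMI opt-in path described above (the `ami_id` value comes from the `eks_managed_node_group` example later in this diff; the node group key and TOML settings are illustrative):

```hcl
module "eks" {
  source = "terraform-aws-modules/eks/aws"
  # ... cluster configuration omitted ...

  eks_managed_node_groups = {
    custom_ami = {
      ami_id   = "ami-0ff61e0bcfc81dc94" # user-supplied custom AMI
      platform = "bottlerocket"

      # Required with a custom AMI: render the module's user data template
      # so nodes are bootstrapped and join the cluster
      enable_bootstrap_user_data = true

      # Merged into the Bottlerocket TOML user data (illustrative settings)
      bootstrap_extra_args = <<-EOT
        [settings.kernel]
        lockdown = "integrity"
      EOT
    }
  }
}
```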
This "additional" security group allows users to customize inbound and outbound rules via the module as they see fit + - The default inbound/outbound rules provided by the module are derived from the [AWS minimum recommendations](https://docs.aws.amazon.com/eks/latest/userguide/sec-group-reqs.html) in addition to NTP and HTTPS public internet egress rules (without, these show up in VPC flow logs as rejects - they are used for clock sync and downloading necessary packages/updates) + - The minimum inbound/outbound rules are provided for cluster and node creation to succeed without errors, but users will most likely need to add the necessary port and protocol for node-to-node communication (this is user specific based on how nodes are configured to communicate across the cluster) + - Users have the ability to opt out of the security group creation and instead provide their own externally created security group if so desired + - The security group that is created is designed to handle the bare minimum communication necessary between the control plane and the nodes, as well as any external egress to allow the cluster to successfully launch without error + - Users also have the option to supply additional, externally created security groups to the cluster as well via the `cluster_additional_security_group_ids` variable + +- Node Group Security Group(s) + - Each node group (EKS Managed Node Group and Self Managed Node Group) by default creates its own security group. By default, this security group does not contain any additional security group rules. It is merely an "empty container" that offers users the ability to opt into any addition inbound our outbound rules as necessary + - Users also have the option to supply their own, and/or additonal, externally created security group(s) to the node group as well via the `vpc_security_group_ids` variable + +The security groups created by this module are depicted in the image shown below along with their default inbound/outbound rules: + +
+ <img src=".github/images/security_groups.svg" alt="Security groups created by this module and their default inbound/outbound rules"> +
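To make these options concrete, here is a minimal sketch; `cluster_additional_security_group_ids`, `cluster_additional_security_group_rules`, and `vpc_security_group_ids` are the module inputs referenced above, while the rule map keys mirror `aws_security_group_rule` arguments and the values are illustrative:

```hcl
module "eks" {
  source = "terraform-aws-modules/eks/aws"
  # ... cluster configuration omitted ...

  # Attach an externally created security group to the control plane
  cluster_additional_security_group_ids = [aws_security_group.additional.id]

  # Extend the module-created cluster security group (illustrative rule)
  cluster_additional_security_group_rules = {
    egress_ntp_tcp = {
      description = "NTP egress"
      protocol    = "tcp"
      from_port   = 123
      to_port     = 123
      type        = "egress"
      cidr_blocks = ["0.0.0.0/0"]
    }
  }

  # Attach an extra, externally created security group to every node group
  self_managed_node_group_defaults = {
    vpc_security_group_ids = [aws_security_group.additional.id]
  }
}
```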
+ ## Notes - Setting `instance_refresh_enabled = true` will recreate your worker nodes without draining them first. It is recommended to install [aws-node-termination-handler](https://github.com/aws/aws-node-termination-handler) for proper node draining. See the [instance_refresh](https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/examples/instance_refresh) example provided, as well as the sketch below.
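A rough sketch of that configuration, mirroring the `irsa_autoscale_refresh` example updated later in this diff (the `preferences` values are illustrative):

```hcl
module "eks" {
  source = "terraform-aws-modules/eks/aws"
  # ... cluster configuration omitted ...

  self_managed_node_groups = {
    refresh = {
      # Roll nodes automatically whenever the launch template changes
      instance_refresh = {
        strategy = "Rolling"
        preferences = {
          min_healthy_percentage = 66 # illustrative
        }
      }
    }
  }
}
```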
@@ -305,6 +354,7 @@ Full contributing [guidelines are covered here](https://github.com/terraform-aws |------|-------------|------|---------|:--------:| | [cloudwatch\_log\_group\_kms\_key\_id](#input\_cloudwatch\_log\_group\_kms\_key\_id) | If a KMS Key ARN is set, this key will be used to encrypt the corresponding log group. Please be sure that the KMS Key has an appropriate key policy (https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/encrypt-log-data-kms.html) | `string` | `null` | no | | [cloudwatch\_log\_group\_retention\_in\_days](#input\_cloudwatch\_log\_group\_retention\_in\_days) | Number of days to retain log events. Default retention - 90 days | `number` | `90` | no | +| [cluster\_additional\_security\_group\_ids](#input\_cluster\_additional\_security\_group\_ids) | List of additional, externally created security group IDs to attach to the cluster control plane | `list(string)` | `[]` | no | | [cluster\_additional\_security\_group\_rules](#input\_cluster\_additional\_security\_group\_rules) | List of additional security group rules to add to the cluster security group created | `map(any)` | `{}` | no | | [cluster\_addons](#input\_cluster\_addons) | Map of cluster addon configurations to enable for the cluster. Addon name can be the map keys or set with `name` | `any` | `{}` | no | | [cluster\_enabled\_log\_types](#input\_cluster\_enabled\_log\_types) | A list of the desired control plane logs to enable. For more information, see Amazon EKS Control Plane Logging documentation (https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html) | `list(string)` | <pre>[<br>  "audit",<br>  "api",<br>  "authenticator"<br>]</pre> | no | diff --git a/examples/complete/main.tf index 77448ed90af..e32aaef930e 100644 --- a/examples/complete/main.tf +++ b/examples/complete/main.tf @@ -48,9 +48,9 @@ module "eks" { # Self Managed Node Group(s) self_managed_node_group_defaults = { - launch_template_default_version = true - vpc_security_group_ids = [aws_security_group.additional.id] - iam_role_additional_policies = ["arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"] + update_launch_template_default_version = true + vpc_security_group_ids = [aws_security_group.additional.id] + iam_role_additional_policies = ["arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"] } self_managed_node_groups = { @@ -120,6 +120,7 @@ module "eks" { GithubRepo = "terraform-aws-eks" GithubOrg = "terraform-aws-modules" } + taints = { dedicated = { key = "dedicated" @@ -127,10 +128,11 @@ effect = "NO_SCHEDULE" } } - # TODO - this is throwing an error - # update_config = { - # max_unavailable_percentage = 50 # or set `max_unavailable` - # } + + update_config = { + max_unavailable_percentage = 50 # or set `max_unavailable` + } + tags = { ExtraTag = "example" } @@ -200,10 +202,10 @@ module "self_managed_node_group" { module.eks.cluster_security_group_id, ] - create_launch_template = true - launch_template_name = "separate-self-mng" - launch_template_default_version = true - instance_type = "m5.large" + create_launch_template = true + launch_template_name = "separate-self-mng" + update_launch_template_default_version = true + instance_type = "m5.large" tags = merge(local.tags, { Separate = "self-managed-node-group" }) } @@ -266,23 +268,23 @@ locals { kind = "Config" current-context = "terraform" clusters = [{ - name = "${module.eks.cluster_id}" + name = module.eks.cluster_id cluster = { - certificate-authority-data = "${module.eks.cluster_certificate_authority_data}" - server = "${module.eks.cluster_endpoint}" + certificate-authority-data = module.eks.cluster_certificate_authority_data + server = module.eks.cluster_endpoint } }] contexts = [{ name = "terraform" context = { - cluster = "${module.eks.cluster_id}" + cluster = module.eks.cluster_id user = "terraform" } }] users = [{ name = "terraform" user = { - token = "${data.aws_eks_cluster_auth.this.token}" + token = data.aws_eks_cluster_auth.this.token } }] }) diff --git a/examples/eks_managed_node_group/main.tf b/examples/eks_managed_node_group/main.tf index 9056c930d36..0b58c0dabc0 100644 --- a/examples/eks_managed_node_group/main.tf +++ b/examples/eks_managed_node_group/main.tf @@ -69,9 +69,9 @@ module "eks" { ami_type = "BOTTLEROCKET_x86_64" platform = "bottlerocket" - create_launch_template = true - launch_template_name = "bottlerocket-custom" - launch_template_default_version = true + create_launch_template = true + launch_template_name = "bottlerocket-custom" + update_launch_template_default_version = true # this will get added to what AWS provides bootstrap_extra_args = <<-EOT @@ -87,9 +87,9 @@ module "eks" { ami_id = "ami-0ff61e0bcfc81dc94" platform = "bottlerocket" - create_launch_template = true - launch_template_name = "bottlerocket-custom" - launch_template_default_version = true + create_launch_template = true + launch_template_name = "bottlerocket-custom" + update_launch_template_default_version = true # use module user data template to bootstrap enable_bootstrap_user_data =
true @@ -171,16 +171,15 @@ module "eks" { } ] - # TODO - this is throwing an error - # update_config = { - # max_unavailable_percentage = 50 # or set `max_unavailable` - # } + update_config = { + max_unavailable_percentage = 50 # or set `max_unavailable` + } - create_launch_template = true - launch_template_name = "eks-managed-ex" - launch_template_use_name_prefix = true - description = "EKS managed node group example launch template" - launch_template_default_version = true + create_launch_template = true + launch_template_name = "eks-managed-ex" + launch_template_use_name_prefix = true + description = "EKS managed node group example launch template" + update_launch_template_default_version = true ebs_optimized = true vpc_security_group_ids = [aws_security_group.additional.id] @@ -270,23 +269,23 @@ locals { kind = "Config" current-context = "terraform" clusters = [{ - name = "${module.eks.cluster_id}" + name = module.eks.cluster_id cluster = { - certificate-authority-data = "${module.eks.cluster_certificate_authority_data}" - server = "${module.eks.cluster_endpoint}" + certificate-authority-data = module.eks.cluster_certificate_authority_data + server = module.eks.cluster_endpoint } }] contexts = [{ name = "terraform" context = { - cluster = "${module.eks.cluster_id}" + cluster = module.eks.cluster_id user = "terraform" } }] users = [{ name = "terraform" user = { - token = "${data.aws_eks_cluster_auth.this.token}" + token = data.aws_eks_cluster_auth.this.token } }] }) diff --git a/examples/irsa_autoscale_refresh/charts.tf b/examples/irsa_autoscale_refresh/charts.tf index d997565cfd1..6a98c1a9bf9 100644 --- a/examples/irsa_autoscale_refresh/charts.tf +++ b/examples/irsa_autoscale_refresh/charts.tf @@ -52,7 +52,7 @@ resource "helm_release" "cluster_autoscaler" { } depends_on = [ - module.eks + module.eks.cluster_id ] } @@ -166,7 +166,7 @@ resource "helm_release" "aws_node_termination_handler" { } depends_on = [ - module.eks + module.eks.cluster_id ] } diff --git a/examples/irsa_autoscale_refresh/main.tf b/examples/irsa_autoscale_refresh/main.tf index 69b3a2cc3f1..687db41385f 100644 --- a/examples/irsa_autoscale_refresh/main.tf +++ b/examples/irsa_autoscale_refresh/main.tf @@ -43,10 +43,10 @@ module "eks" { max_size = 5 desired_size = 1 - instance_types = ["m5.large", "m5n.large", "m5zn.large", "m6i.large", ] - create_launch_template = true - launch_template_name = "refresh" - launch_template_default_version = true + instance_type = "m5.large" + create_launch_template = true + launch_template_name = "refresh" + update_launch_template_default_version = true instance_refresh = { strategy = "Rolling" @@ -86,23 +86,23 @@ locals { kind = "Config" current-context = "terraform" clusters = [{ - name = "${module.eks.cluster_id}" + name = module.eks.cluster_id cluster = { - certificate-authority-data = "${module.eks.cluster_certificate_authority_data}" - server = "${module.eks.cluster_endpoint}" + certificate-authority-data = module.eks.cluster_certificate_authority_data + server = module.eks.cluster_endpoint } }] contexts = [{ name = "terraform" context = { - cluster = "${module.eks.cluster_id}" + cluster = module.eks.cluster_id user = "terraform" } }] users = [{ name = "terraform" user = { - token = "${data.aws_eks_cluster_auth.this.token}" + token = data.aws_eks_cluster_auth.this.token } }] }) @@ -159,7 +159,5 @@ module "vpc" { "kubernetes.io/role/internal-elb" = 1 } - tags = merge(local.tags, - { "kubernetes.io/cluster/${local.name}" = "shared" } - ) + tags = local.tags } diff --git 
a/examples/self_managed_node_group/main.tf b/examples/self_managed_node_group/main.tf index 3e31feba2a0..7646d24411f 100644 --- a/examples/self_managed_node_group/main.tf +++ b/examples/self_managed_node_group/main.tf @@ -117,10 +117,9 @@ module "eks" { GithubOrg = "terraform-aws-modules" } - # TODO - this is throwing an error - # update_config = { - # max_unavailable_percentage = 50 # or set `max_unavailable` - # } + update_config = { + max_unavailable_percentage = 50 # or set `max_unavailable` + } create_launch_template = true launch_template_name = "self-managed-ex" @@ -222,23 +221,23 @@ locals { kind = "Config" current-context = "terraform" clusters = [{ - name = "${module.eks.cluster_id}" + name = module.eks.cluster_id cluster = { - certificate-authority-data = "${module.eks.cluster_certificate_authority_data}" - server = "${module.eks.cluster_endpoint}" + certificate-authority-data = module.eks.cluster_certificate_authority_data + server = module.eks.cluster_endpoint } }] contexts = [{ name = "terraform" context = { - cluster = "${module.eks.cluster_id}" + cluster = module.eks.cluster_id user = "terraform" } }] users = [{ name = "terraform" user = { - token = "${data.aws_eks_cluster_auth.this.token}" + token = data.aws_eks_cluster_auth.this.token } }] }) diff --git a/main.tf b/main.tf index 2c224afced9..7ea9c5a479b 100644 --- a/main.tf +++ b/main.tf @@ -13,7 +13,7 @@ resource "aws_eks_cluster" "this" { enabled_cluster_log_types = var.cluster_enabled_log_types vpc_config { - security_group_ids = [local.cluster_security_group_id] + security_group_ids = distinct(concat(var.cluster_additional_security_group_ids, [local.cluster_security_group_id])) subnet_ids = var.subnet_ids endpoint_private_access = var.cluster_endpoint_private_access endpoint_public_access = var.cluster_endpoint_public_access diff --git a/modules/_user_data/README.md b/modules/_user_data/README.md index 1dcba039645..80ea888cc8a 100644 --- a/modules/_user_data/README.md +++ b/modules/_user_data/README.md @@ -116,7 +116,7 @@ No modules. | [cluster\_auth\_base64](#input\_cluster\_auth\_base64) | Base64 encoded CA of associated EKS cluster | `string` | `""` | no | | [cluster\_endpoint](#input\_cluster\_endpoint) | Endpoint of associated EKS cluster | `string` | `""` | no | | [cluster\_name](#input\_cluster\_name) | Name of the EKS cluster | `string` | `""` | no | -| [create](#input\_create) | Determines whether to create EKS managed node group or not | `bool` | `true` | no | +| [create](#input\_create) | Determines whether to create user-data or not | `bool` | `true` | no | | [enable\_bootstrap\_user\_data](#input\_enable\_bootstrap\_user\_data) | Determines whether the bootstrap configurations are populated within the user data template | `bool` | `false` | no | | [is\_eks\_managed\_node\_group](#input\_is\_eks\_managed\_node\_group) | Determines whether the user data is used on nodes in an EKS managed node group. Used to determine if user data will be appended or not | `bool` | `true` | no | | [platform](#input\_platform) | Identifies if the OS platform is `bottlerocket`, `linux`, or `windows` based | `string` | `"linux"` | no | diff --git a/modules/eks-managed-node-group/README.md b/modules/eks-managed-node-group/README.md index c789c5ad7b7..a5d5c01e0d2 100644 --- a/modules/eks-managed-node-group/README.md +++ b/modules/eks-managed-node-group/README.md @@ -12,40 +12,6 @@ $ terraform plan $ terraform apply ``` -# TODO - Update Notes vvv - -Note that this example may create resources which cost money. 
Run `terraform destroy` when you don't need these resources. - -# User Data Configurations - -- https://github.com/aws/containers-roadmap/issues/596#issuecomment-675097667 -> An important note is that user data must in MIME multi-part archive format, -> as by default, EKS will merge the bootstrapping command required for nodes to join the -> cluster with your user data. If you use a custom AMI in your launch template, -> this merging will (__NOT__) happen and you are responsible for nodes joining the cluster. -> See [docs for more details](https://docs.aws.amazon.com/eks/latest/userguide/launch-templates.html#launch-template-user-data) - -- https://aws.amazon.com/blogs/containers/introducing-launch-template-and-custom-ami-support-in-amazon-eks-managed-node-groups/ - -a. Use EKS provided AMI which merges its user data with the user data users provide in the launch template - i. No additional user data - ii. Add additional user data -b. Use custom AMI which MUST bring its own user data that bootstraps the node - i. Bring your own user data (whole shebang) - ii. Use "default" template provided by module here and (optionally) any additional user data - -TODO - need to try these out in order and verify and document what happens with user data. - - -## From LT -This is based on the LT that EKS would create if no custom one is specified (aws ec2 describe-launch-template-versions --launch-template-id xxx) there are several more options one could set but you probably dont need to modify them you can take the default and add your custom AMI and/or custom tags -# -Trivia: AWS transparently creates a copy of your LaunchTemplate and actually uses that copy then for the node group. If you DONT use a custom AMI, - -If you use a custom AMI, you need to supply via user-data, the bootstrap script as EKS DOESNT merge its managed user-data then you can add more than the minimum code you see in the template, e.g. install SSM agent, see https://github.com/aws/containers-roadmap/issues/593#issuecomment-577181345 - # -(optionally you can use https://registry.terraform.io/providers/hashicorp/cloudinit/latest/docs/data-sources/cloudinit_config to render the script, example: https://github.com/terraform-aws-modules/terraform-aws-eks/pull/997#issuecomment-705286151) then the default user-data for bootstrapping a cluster is merged in the copy. - ## Requirements diff --git a/modules/eks-managed-node-group/main.tf b/modules/eks-managed-node-group/main.tf index f32e8e8a977..85244506dd9 100644 --- a/modules/eks-managed-node-group/main.tf +++ b/modules/eks-managed-node-group/main.tf @@ -310,7 +310,7 @@ resource "aws_eks_node_group" "this" { } dynamic "update_config" { - for_each = var.update_config + for_each = length(var.update_config) > 0 ? 
[var.update_config] : [] content { max_unavailable_percentage = try(update_config.value.max_unavailable_percentage, null) max_unavailable = try(update_config.value.max_unavailable, null) diff --git a/node_groups.tf b/node_groups.tf index ad47a2fa84a..12a716c3c1d 100644 --- a/node_groups.tf +++ b/node_groups.tf @@ -112,7 +112,10 @@ resource "aws_security_group" "node" { tags = merge( var.tags, - { "Name" = local.node_sg_name }, + { + "Name" = local.node_sg_name + "kubernetes.io/cluster/${var.cluster_name}" = "owned" + }, var.node_security_group_tags ) } @@ -228,7 +231,7 @@ module "eks_managed_node_group" { key_name = try(each.value.key_name, var.eks_managed_node_group_defaults.key_name, null) vpc_security_group_ids = compact(concat([try(aws_security_group.node[0].id, "")], try(each.value.vpc_security_group_ids, var.eks_managed_node_group_defaults.vpc_security_group_ids, []))) launch_template_default_version = try(each.value.launch_template_default_version, var.eks_managed_node_group_defaults.launch_template_default_version, null) - update_launch_template_default_version = try(each.value.update_launch_template_default_version, var.eks_managed_node_group_defaults.update_launch_template_default_version, null) + update_launch_template_default_version = try(each.value.update_launch_template_default_version, var.eks_managed_node_group_defaults.update_launch_template_default_version, true) disable_api_termination = try(each.value.disable_api_termination, var.eks_managed_node_group_defaults.disable_api_termination, null) kernel_id = try(each.value.kernel_id, var.eks_managed_node_group_defaults.kernel_id, null) ram_disk_id = try(each.value.ram_disk_id, var.eks_managed_node_group_defaults.ram_disk_id, null) @@ -346,7 +349,7 @@ module "self_managed_node_group" { vpc_security_group_ids = compact(concat([try(aws_security_group.node[0].id, "")], try(each.value.vpc_security_group_ids, var.self_managed_node_group_defaults.vpc_security_group_ids, []))) cluster_security_group_id = local.cluster_security_group_id launch_template_default_version = try(each.value.launch_template_default_version, var.self_managed_node_group_defaults.launch_template_default_version, null) - update_launch_template_default_version = try(each.value.update_launch_template_default_version, var.self_managed_node_group_defaults.update_launch_template_default_version, null) + update_launch_template_default_version = try(each.value.update_launch_template_default_version, var.self_managed_node_group_defaults.update_launch_template_default_version, true) disable_api_termination = try(each.value.disable_api_termination, var.self_managed_node_group_defaults.disable_api_termination, null) instance_initiated_shutdown_behavior = try(each.value.instance_initiated_shutdown_behavior, var.self_managed_node_group_defaults.instance_initiated_shutdown_behavior, null) kernel_id = try(each.value.kernel_id, var.self_managed_node_group_defaults.kernel_id, null) diff --git a/variables.tf b/variables.tf index f5df58f58ed..43b26aea901 100644 --- a/variables.tf +++ b/variables.tf @@ -32,6 +32,12 @@ variable "cluster_enabled_log_types" { default = ["audit", "api", "authenticator"] } +variable "cluster_additional_security_group_ids" { + description = "List of additional, externally created security group IDs to attach to the cluster control plane" + type = list(string) + default = [] +} + variable "subnet_ids" { description = "A list of subnet IDs where the EKS cluster (ENIs) will be provisioned along with the nodes/node groups. 
Node groups can be deployed within a different set of subnet IDs from within the node group configuration" type = list(string)
"audit",
"api",
"authenticator"
]