
Maybe a bug? #1


Vlaaaaaaad

Output:

terraform plan                                                        

│ Error: Invalid for_each argument
│ 
│   on .terraform/modules/eks/modules/eks-managed-node-group/main.tf line 366, in resource "aws_security_group_rule" "this":
│  366:   for_each = { for k, v in var.security_group_rules : k => v if local.create_security_group }
│     ├────────────────
│     │ local.create_security_group is true
│     │ var.security_group_rules will be known only after apply
│ 
│ The "for_each" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created. To work around this, use
│ the -target argument to first apply only the resources that the for_each depends on.
╵
╷
│ Error: Invalid for_each argument
│ 
│   on .terraform/modules/eks/modules/eks-managed-node-group/main.tf line 427, in resource "aws_iam_role_policy_attachment" "this":
│  427:   for_each = var.create && var.create_iam_role ? toset(compact(distinct(concat([
│  428:     "${local.policy_arn_prefix}/AmazonEKSWorkerNodePolicy",
│  429:     "${local.policy_arn_prefix}/AmazonEC2ContainerRegistryReadOnly",
│  430:     "${local.policy_arn_prefix}/AmazonEKS_CNI_Policy",
│  431:   ], var.iam_role_additional_policies)))) : toset([])
│     ├────────────────
│     │ local.policy_arn_prefix is "arn:aws:iam::aws:policy"
│     │ var.create is true
│     │ var.create_iam_role is true
│     │ var.iam_role_additional_policies is a list of string, known only after apply
│ 
│ The "for_each" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created. To work around this, use
│ the -target argument to first apply only the resources that the for_each depends on.
╵
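
For context, the errors above are instances of Terraform's general restriction that for_each keys must be enumerable at plan time. A minimal standalone sketch (all resource names hypothetical, not taken from the module) that triggers the same message:

```hcl
# Hypothetical minimal reproduction: the for_each keys are derived from
# an attribute (the security group id) that is only known after apply,
# so Terraform cannot predict how many instances will be created.
resource "aws_security_group" "example" {
  name_prefix = "example-"
}

resource "aws_security_group_rule" "this" {
  # toset([...unknown value...]) cannot be enumerated at plan time,
  # producing "Error: Invalid for_each argument".
  for_each = toset([aws_security_group.example.id])

  type              = "ingress"
  from_port         = 443
  to_port           = 443
  protocol          = "tcp"
  cidr_blocks       = ["10.0.0.0/16"]
  security_group_id = each.value
}
```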

@antonbabenko

You are doing it all right! This is one of the most irritating bugs in Terraform. We occasionally work around it in some modules with varying levels of success. More info: hashicorp/terraform#4149

@daroga0002

What Terraform version are you using?

@Vlaaaaaaad
Author

Vlaaaaaaad commented Dec 15, 2021

I am on Terraform v1.1.0.

@Vlaaaaaaad
Author

Vlaaaaaaad commented Dec 15, 2021

Welp, Terraform v1.1.1 was just released aaaand I can confirm the plan works now! Maybe due to hashicorp/terraform#30171 ?

I'll test a full apply and a more complex example tomorrow and report back!

@Vlaaaaaaad
Author

Ahem, nevermind, I was in the wrong folder 🤦 The bug is still present and Terraform v1.1.1 does not change the situation.

@Vlaaaaaaad
Author

Vlaaaaaaad commented Dec 16, 2021

Workaround: managing the IAM Policy attachment and Security Group rule attachment from outside the module works, if and only if I set the Managed Node Groups to have min_size = desired_size = 0 and max_size = 1 (max_size can't also be 0), run terraform apply, and then change min_size and max_size to the desired values!

Having anything higher than 0 leads to failure due to circular resources/weird dependencies:

  • NodeGroups get created
  • EC2s get started up but can't join the cluster due to lacking IAM and Security Group permissions (in my case I only allowed image pulls through an ECR VPC Endpoint)
  • NodeGroups don't finish getting successfully created (as they wait for EC2s to successfully join), so the module does not output the iam_role_name or security_group_id and the extra IAM and SG rules can't be attached.

EDIT: buuut, a manual step has to be done due to the lifecycle ignore_changes on desired_size, which is initially set to 0:

│ Error: error updating EKS Node Group (...) config: InvalidParameterException: Minimum capacity 1 can't be greater than desired size 0
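
A rough sketch of the workaround described above (module input shape and output attribute names are illustrative, not verified against the module's actual interface):

```hcl
# Hypothetical sketch: keep the node group empty on the first apply so
# it can finish creating, attach the extra IAM policy and SG rule from
# outside the module, then raise min_size/max_size afterwards.
module "eks" {
  source = "..." # the EKS module under discussion; path elided

  # ... cluster configuration elided ...

  eks_managed_node_groups = {
    default = {
      min_size     = 0 # raise to the real value after the first apply
      desired_size = 0
      max_size     = 1 # must be at least 1; 0 is rejected
    }
  }
}

# Managed outside the module, so its for_each/count inputs stay known.
# The output attribute names here are assumptions for illustration.
resource "aws_iam_role_policy_attachment" "workers_extra" {
  role       = module.eks.eks_managed_node_groups["default"].iam_role_name
  policy_arn = aws_iam_policy.workers_extra.arn
}
```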

@bryantbiggs
Owner

Hey @Vlaaaaaaad, sorry, just getting around to this. Yes, the issue is what @antonbabenko linked to, and it's rather frustrating. The way I generally work around this is by pre-creating the resources that are referenced. If trying to deploy everything anew, you can do:

terraform apply --target aws_security_group.example_dynamic --target aws_iam_policy.workers_extra 
terraform apply 

or through some other means of resource creation ordering, as long as the security group and IAM policy already exist before you attempt to create the EKS cluster.
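
The pre-creation approach can be sketched as follows (resource names reuse the illustrative ones from the targeted-apply commands above; the data source and module inputs are assumptions): once these resources exist in state after the targeted apply, their attributes are known values, so the module's internal for_each keys become determinable at plan time.

```hcl
# Hypothetical pre-created resources referenced by the module. After
# "terraform apply -target" creates them, their ARN/id are known in
# subsequent plans, avoiding "Invalid for_each argument".
resource "aws_iam_policy" "workers_extra" {
  name   = "workers-extra"
  policy = data.aws_iam_policy_document.workers_extra.json # assumed to exist
}

resource "aws_security_group" "example_dynamic" {
  name_prefix = "example-"
  vpc_id      = var.vpc_id
}

module "eks" {
  source = "..." # elided

  # Illustrative input names: pass the now-known values into the module.
  iam_role_additional_policies = [aws_iam_policy.workers_extra.arn]
  # ... remaining configuration elided ...
}
```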

@Vlaaaaaaad
Author

Don't even think about the delay @bryantbiggs! Since the big re-factor is still WIP, I just wanted to share the result of my testing.
What you do with this info is now up to you! I think a note somewhere so people don't get surprised by the same issue might be worth it, but that's my non-binding review opinion 😆

@bryantbiggs
Owner

Yes, thank you for the report - it's much appreciated. I think you are correct, and it's worth noting down and making users aware. I am cautiously trying to avoid bloat from too much caveat documentation (hard not to when dealing with something as complex as Kubernetes and all the various ways of configuring it on EKS, etc.), but I think this is directly related to module usage and a worthy detail to include.

@Vlaaaaaaad
Author

Closing this as the bug was ACKed and there's not much we can do about it 🤷

@Vlaaaaaaad Vlaaaaaaad closed this Dec 17, 2021
@github-actions

github-actions bot commented Nov 8, 2022

I'm going to lock this pull request because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems related to this change, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Nov 8, 2022