
improvement: count to for_each #1309

Conversation


@sbeginCoveo commented on Apr 18, 2021

PR o'clock

Description

Closes #1105
Partially addresses #635

There are two themes to this PR.

  • One is merging the default values into the locals, avoiding the repeated lookup calls that pollute the codebase
  • The main theme is migrating count to for_each, so that we can use relevant key names for identification (a sketch of the pattern follows the example addresses below)
module.terraform_aws_eks.aws_autoscaling_group.workers["node-1"]
module.terraform_aws_eks.aws_autoscaling_group.workers["node-spots-2"]
module.terraform_aws_eks.aws_autoscaling_group.workers["gpu-1"]
...
module.terraform_aws_eks.aws_launch_configuration.workers["node-1"]
module.terraform_aws_eks.aws_launch_configuration.workers["node-spots-2"]
module.terraform_aws_eks.aws_launch_configuration.workers["gpu-1"]

State migration process

There are no breaking changes in the interface (variables, outputs), but a tfstate migration is required to move to the for_each index pattern.

The four main resources to migrate are:

  • aws_autoscaling_group.workers
  • aws_iam_instance_profile.workers
  • aws_launch_configuration.workers
  • random_pet.workers

This can be done in a variety of ways, but I've built a migration-file generator that feeds on the terraform plan output and prints terraform state mv commands. Please tell me if you are interested, and I'll look into open-sourcing it.

Example migration

terraform state mv module.terraform_aws_eks.aws_autoscaling_group.workers[0]  module.terraform_aws_eks.aws_autoscaling_group.workers["node-0"]
terraform state mv module.terraform_aws_eks.aws_autoscaling_group.workers[1]  module.terraform_aws_eks.aws_autoscaling_group.workers["node-1"]
terraform state mv module.terraform_aws_eks.aws_autoscaling_group.workers[2]  module.terraform_aws_eks.aws_autoscaling_group.workers["node-spots-0"]

terraform state mv module.terraform_aws_eks.aws_iam_instance_profile.workers[0]  module.terraform_aws_eks.aws_iam_instance_profile.workers["node-0"]
terraform state mv module.terraform_aws_eks.aws_iam_instance_profile.workers[1]  module.terraform_aws_eks.aws_iam_instance_profile.workers["node-1"]
terraform state mv module.terraform_aws_eks.aws_iam_instance_profile.workers[2]  module.terraform_aws_eks.aws_iam_instance_profile.workers["node-spots-0"]

terraform state mv module.terraform_aws_eks.aws_launch_configuration.workers[0]  module.terraform_aws_eks.aws_launch_configuration.workers["node-0"]
terraform state mv module.terraform_aws_eks.aws_launch_configuration.workers[1]  module.terraform_aws_eks.aws_launch_configuration.workers["node-1"]
terraform state mv module.terraform_aws_eks.aws_launch_configuration.workers[2]  module.terraform_aws_eks.aws_launch_configuration.workers["node-spots-0"]

terraform state mv module.terraform_aws_eks.random_pet.workers[0]  module.terraform_aws_eks.random_pet.workers["node-0"]
terraform state mv module.terraform_aws_eks.random_pet.workers[1]  module.terraform_aws_eks.random_pet.workers["node-1"]
terraform state mv module.terraform_aws_eks.random_pet.workers[2]  module.terraform_aws_eks.random_pet.workers["node-spots-0"]
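
As a hedged aside (not part of this PR, which targets older Terraform versions): on Terraform 1.1 and later, the same index-to-key moves can be declared inside the module with moved blocks instead of running terraform state mv by hand, for example:

moved {
  from = aws_autoscaling_group.workers[0]
  to   = aws_autoscaling_group.workers["node-0"]
}

moved {
  from = aws_launch_configuration.workers[0]
  to   = aws_launch_configuration.workers["node-0"]
}

# ...and so on, one moved block per resource instance being re-keyed.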

Checklist


@james-callahan left a comment


I think this is a great first step to getting #858 integrated.

The one thing I'll comment on: since the for_each key is not customizable, we still have re-ordering issues as worker groups are added or removed, which will require another state migration to fix.

@james-callahan

Looks like this needs rebasing now.

@sbeginCoveo
Author

@barryib Would you like to add your review?

@sbeginCoveo
Author

@james-callahan

The one thing I'll comment on: since the for_each key is not customizable, we still have re-ordering issues as worker groups are added or removed, which will require another state migration to fix.

I've played with the idea of adding a dedicated identifier to be used as the map index, but the name is still used in name_prefix, which forces an ASG recreation when it changes, so the name has to be unique and unchanging just as a dedicated id would be. Either way, this forces clients to keep their names/ids unique and unchanging over time in order to benefit from the resiliency introduced by for_each.

So at this point I don't see an advantage to adding an extra parameter. This set of changes can be merged as is, and clients will have to replace their ASGs by changing their names if they want to benefit from the order-preservation feature offered by for_each.

Tell me what you think :)

@sbeginCoveo
Author

👋 Where are we at regarding this feature? It's required to enable us to have unique keys on autoscaling groups.


@james-callahan left a comment


LGTM; would love to see this merged now.

The only comment I'll add is that the commits are a bit of a mess; I'm not sure whether this repository does squash merging or what the story is.

@sbeginCoveo force-pushed the refactor-to-for-each branch from 1bafbdb to 77d9c69 on May 4, 2021 16:21
@barryib
Member

barryib commented May 4, 2021

Hello @sbeginCoveo, thank you for your work.

We do plan to move from count to for_each, but to do that, we plan to move the code into sub-modules and use for_each over them. I can't merge this because it doesn't go in that direction. I'll keep this PR open for now; your idea of merging the locals is great, and I will contact you soon about adding it in a future branch.

@barryib added the on hold label on May 4, 2021
@james-callahan

but to do that, we plan to move the code into sub-modules and use for_each over them.

What do you mean by that?

I can't merge this because it doesn't go in that direction

Isn't this already an improvement? Could the move to a submodule happen as a second step?


@sbeginCoveo it seems the PR needs rebasing after another recent merge

@barryib
Member

barryib commented May 5, 2021

but to do that, we plan to move the code into sub-modules and use for_each over them.

What do you mean by that?

I mean that we will move the code into submodules and use for_each to loop over them (a sketch of that shape follows the links below).

Please see

#774 (comment)
#635
#1101

See also #1132 (comment)
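
For context, here is a minimal sketch of the submodule direction being described; the module path and variable names are assumptions for illustration, not the actual layout proposed in those issues:

# Each worker group becomes one instance of a self-contained submodule (Terraform >= 0.13).
module "worker_groups" {
  source   = "./modules/worker-group" # hypothetical submodule path
  for_each = local.worker_groups_map  # map keyed by worker group name

  name          = each.key
  instance_type = each.value["instance_type"]
  asg_max_size  = each.value["asg_max_size"]
}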

@sbeginCoveo
Author

I'm glad we finally decided on an outcome.

While I really like the direction you are suggesting with the submodule split, it most probably won't make it into a release soon, and many of us are waiting on for_each to move some ASGs around.

@sbeginCoveo
Author

sbeginCoveo commented May 13, 2021

Well, I probably won't be maintaining this branch since we don't intend to merge it.
Feel free to reuse the locals-merging strategy in another PR 👋

Thank you @james-callahan for your continued assistance

@innovia

innovia commented Jun 16, 2021

@barryib When is this going to happen? What's the workaround? I currently have a huge mess when I need to remove an ASG from the list.

I thought about decoupling this into a module like you said, but if you have any updates or pointers, please let me know.

@github-actions

I'm going to lock this pull request because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems related to this change, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

The github-actions bot locked this pull request as resolved and limited conversation to collaborators on Nov 14, 2022.
Successfully merging this pull request may close these issues:

Feat/bug: workers ASGs that are more resilient to unnecessary change