
Not able to create worker group into existing eks cluster (conditional creation) #861

Closed
tarunptala opened this issue May 6, 2020 · 6 comments

Comments

@tarunptala

tarunptala commented May 6, 2020

We are not able to conditionally create EKS resources (a worker group) against an existing cluster created by this module. We came across the Conditional creation section of terraform-aws-eks.

FYI: we have another module called eks that creates the cluster using terraform-aws-eks. We are now trying to find a way to create the worker group from a separate module (conditional creation). I don't want the worker group to be part of the EKS cluster creation module.

What is the current behavior?

It looks like it is trying to create a new cluster instead of creating only a worker group for the existing cluster.

If this is a bug, how to reproduce? Please include a code sample if relevant.

My configuration looks like this; cluster_name is passed in through a variable to discover the existing cluster.

data "aws_eks_cluster" "cluster" {
  count = var.create_eks ? 1 : 0
  name  = var.cluster_name
}

data "aws_eks_cluster_auth" "cluster" {
  count = var.create_eks ? 1 : 0
  name  = var.cluster_name
}

provider "kubernetes" {
  host                   = element(concat(data.aws_eks_cluster.cluster[*].endpoint, list("")), 0)
  cluster_ca_certificate = base64decode(element(concat(data.aws_eks_cluster.cluster[*].certificate_authority.0.data, list("")), 0))
  token                  = element(concat(data.aws_eks_cluster_auth.cluster[*].token, list("")), 0)
  load_config_file       = false
  version                = "~> 1.11"
}

module "eks-worker" {
  source     = "terraform-aws-modules/eks/aws"
  version    = "~> 11.0.0"
  create_eks = false

  tags = {
    Name        = "${var.env}-${var.workload_type}-eks-worker-nodes"
    Environment = var.env
    Terraform   = "true"
  }

  worker_groups = [
    {
      name                          = "${var.env}-${var.workload_type}-eks-worker-nodes"
      instance_type                 = var.instance_type
      asg_min_size                  = var.asg_min_size
      asg_max_size                  = var.asg_max_size
      key_name                      = var.key_name
      map_roles                     = var.map_roles

      tags = [{
        key                 = "k8s.io/cluster-autoscaler/${var.cluster_name}"
        value               = "owned"
        propagate_at_launch = true
        },
        {
          key                 = "k8s.io/cluster-autoscaler/enabled"
          value               = "true"
          propagate_at_launch = true
      }]

    }
  ]

}

What's the expected behavior?

It should allow us to create a worker group within the existing cluster.

Environment details

  • Affected module version: 11.0.0
  • OS: Mac
  • Terraform version: v0.12.24

Any other relevant info

Using Terragrunt

@dpiddockcmp
Contributor

This module creates a cluster. There is currently no way to use it to add workers to an existing cluster.

There have been suggestions in the past about pulling the module apart to allow for reusable components, whilst keeping the top-level module as a one-shot solution. A recent discussion about this happened in #774
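
Until something like that lands, workers for an existing cluster have to be managed with plain AWS resources outside the module. A minimal sketch of what that involves, assuming a self-managed node group that joins via the standard EKS bootstrap script (the variable names here are hypothetical, not module inputs):

# Self-managed workers for an existing cluster, outside the module.
# All variable names below are hypothetical.
resource "aws_launch_template" "worker" {
  name_prefix   = "${var.cluster_name}-worker-"
  image_id      = var.worker_ami_id # an EKS-optimized AMI
  instance_type = var.instance_type

  # Join the existing cluster at boot via the EKS bootstrap script.
  user_data = base64encode(<<-EOT
    #!/bin/bash
    /etc/eks/bootstrap.sh ${var.cluster_name}
  EOT
  )
}

resource "aws_autoscaling_group" "worker" {
  name_prefix         = "${var.cluster_name}-worker-"
  min_size            = var.asg_min_size
  max_size            = var.asg_max_size
  vpc_zone_identifier = var.subnet_ids

  launch_template {
    id      = aws_launch_template.worker.id
    version = "$Latest"
  }

  # Required so the cluster can discover its nodes.
  tag {
    key                 = "kubernetes.io/cluster/${var.cluster_name}"
    value               = "owned"
    propagate_at_launch = true
  }
}

Note that an IAM instance profile, security group rules to the control plane, and an aws-auth ConfigMap entry mapping the node role are all still needed on top of this; that wiring is exactly what the module normally handles for you.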

@tarunptala
Author

@dpiddockcmp thanks for acknowledging it. I'll keep a watch on #774

@stale

stale bot commented Aug 9, 2020

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale stale bot added the stale label Aug 9, 2020
@stale

stale bot commented Sep 9, 2020

This issue has been automatically closed because it has not had recent activity since being marked as stale.

@stale stale bot closed this as completed Sep 9, 2020
@willquill

It is possible to conditionally create workers using this module. I tested my method today, and it works.

You need to use a conditional expression as outlined here: https://www.terraform.io/docs/language/expressions/conditionals.html

Instead of defining the worker group list inline, put this in the module block:

worker_groups = var.create_workers ? local.worker_groups : []

Your actual worker list lives in the local. If you set the create_workers boolean variable to true, the module uses the local list; if it's false, it uses an empty list (i.e., no worker groups at all).
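
For completeness, create_workers is just a plain boolean variable; a minimal declaration (assuming you want workers by default) looks like:

variable "create_workers" {
  description = "Whether to attach worker groups to the cluster"
  type        = bool
  default     = true
}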

See below for a complete example:

// Deploys the cluster and worker nodes
module "eks" {
  source          = "terraform-aws-modules/eks/aws"
  version         = "17.20.0"
  cluster_name    = "${var.environment}-eks"
  cluster_version = "1.21"

  vpc_id                    = var.vpc_id
  subnets                   = var.eks_subnets
  cluster_enabled_log_types = ["api", "audit", "authenticator", "controllerManager", "scheduler"]

  cluster_endpoint_private_access = true
  cluster_endpoint_public_access  = true
  manage_worker_iam_resources     = false
  manage_cluster_iam_resources    = false
  cluster_iam_role_name           = "eks-cluster"
  cluster_create_security_group   = true
  worker_create_security_group    = true
  write_kubeconfig                = false

  // This eliminates the need to create a resource for aws_iam_openid_connect_provider
  enable_irsa              = true
  openid_connect_audiences = ["sts.amazonaws.com"]

  tags = var.eks_tags

  worker_groups = var.create_workers ? local.worker_groups : []

  map_users = var.eks_map_users
  map_roles = var.eks_map_roles
}

locals {
  worker_groups = [
    {
      name                      = "general-1_21"
      ami_id                    = aws_ami_copy.eks_worker_1_21.id
      instance_type             = var.eks_worker_instance_type
      asg_max_size              = var.asg_max_size
      asg_min_size              = var.asg_min_size
      asg_desired_capacity      = var.asg_desired_capacity
      iam_instance_profile_name = "eks-worker-iamprofile"
      root_encrypted            = true
      ebs_optimized             = true
      key_name                  = var.key_name
      bootstrap_extra_args      = "--use-max-pods false"
      kubelet_extra_args        = "--node-labels=node_group=general"
      tags = [
        {
          "key"                 = "k8s.io/cluster-autoscaler/enabled"
          "propagate_at_launch" = "false"
          "value"               = "true"
        },
        {
          "key"                 = "k8s.io/cluster-autoscaler/${var.environment}-eks"
          "propagate_at_launch" = "false"
          "value"               = "true"
        }
      ]
      # additional_userdata  = "something here"
    }
  ]
}

And while this next part has nothing to do with the solution, it's a handy little tip for how the AMI referenced by ami_id was created in my example worker_group:

// Find a specific AMI
data "aws_ami" "eks_worker_base_1_21" {
  filter {
    name   = "name"
    values = ["amazon-eks-node-1.21-v20211013"]
  }

  # Owner ID of AWS EKS team
  owners = ["amazon"]
}

// Creates a private copy of the public AMI
resource "aws_ami_copy" "eks_worker_1_21" {
  name              = "${data.aws_ami.eks_worker_base_1_21.name}-${var.env}-encrypted"
  description       = "Encrypted version of EKS worker AMI for ${var.environment}-eks cluster"
  source_ami_id     = data.aws_ami.eks_worker_base_1_21.id
  source_ami_region = var.source_ami_region
  encrypted         = true

  tags = merge(var.tags, { "Name" = "${data.aws_ami.eks_worker_base_1_21.name}-${var.env}-encrypted" })
}

@github-actions

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Nov 17, 2022