
Difference between node_groups and worker_groups #895

Closed
alexanderursu99 opened this issue May 28, 2020 · 14 comments

Comments

@alexanderursu99


Hi, I recently started using this module and am happy with it so far.

However, I do have a question about the node_groups and worker_groups keys.

I see that the basic example uses worker_groups, whereas the managed node groups example uses node_groups, and the two seem very similar.

Is this something specific to managed node groups? I don't see it mentioned in the related docs.

Is the node_groups format just a way of working around this problem in the FAQ?

Side question: how many different ways are there to set custom labels on nodes? I see the irsa example using the tags key, which is different from the managed node groups example, and there's also this comment about the kubelet_extra_args key, which is the only mention I see of setting Kubernetes labels.

I'm not sure if creating an issue is the best place to ask questions about this module, but I'm confused by the documentation.

@d3bt3ch

d3bt3ch commented May 28, 2020

@Alxander64 node_groups are AWS EKS managed node groups, whereas worker_groups are self-managed nodes. One advantage of worker_groups, among many, is that you can use a custom AMI for the nodes.
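For illustration only, a minimal sketch of a self-managed worker group pinned to a custom AMI, assuming the module's worker_groups ami_id option; the group name and AMI ID are placeholders:

worker_groups = [
  {
    # Hypothetical self-managed group using a custom AMI (placeholder ID).
    name                 = "custom-ami-workers"
    ami_id               = "ami-0123456789abcdef0"
    instance_type        = "m5.large"
    asg_desired_capacity = 2
    asg_max_size         = 4
  },
]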

@darrenfurr

@Alxander64 - node groups are completely managed by AWS, so you don't even see the EC2 instances, whereas with worker groups you do see them in EC2. As AWS says, "with worker groups the customer controls the data plane & AWS controls the Control Plane".

@dpiddockcmp
Contributor

@darrenfurr That is not true. The EKS Managed Node Groups system creates a standard ASG in your account, with EC2 instances that you can see and access.

The MNG system is supposed to ease some of the lifecycle around upgrading nodes, although it does not do this automatically for you; you still need to trigger node updates when updates are available.

As for the side question:

  • tags are an AWS function. They appear in the console and can be used for things like billing controls or ABAC
  • Almost all tag-able AWS resources specify tags as a simple map(string). ASGs are special and take a list of maps with three required keys: key, value and propagate_at_launch. All tags in the list are applied to the ASG; only tags with propagate_at_launch = true are also applied to EC2 instances launched by the ASG (see the sketch after this list).
  • labels are a Kubernetes construct. They appear inside the kubernetes system and can be used for controlling where pods run via node affinity.
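As an illustration of the ASG tag format described above, a sketch of tags on a self-managed worker group; the tag keys and values are placeholders:

worker_groups = [
  {
    name          = "example"
    instance_type = "m5.large"
    tags = [
      {
        key                 = "Team"
        value               = "platform"
        propagate_at_launch = true  # also copied to EC2 instances launched by the ASG
      },
      {
        key                 = "CostCenter"
        value               = "1234"
        propagate_at_launch = false # stays on the ASG only
      },
    ]
  },
]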

@darrenfurr

@dpiddockcmp - Thanks for the clarification. It was my understanding that you would not see them under EC2 instances. We're using worker groups. So does that mean with managed node groups we do NOT need to install the AWS Cluster Autoscaler?

@dpiddockcmp
Contributor

You still need to install a cluster autoscaler if you want the number of worker nodes to dynamically scale. MNG adds the necessary tags on the ASG to allow the cluster autoscaler to function.
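For reference, these are the standard k8s.io/cluster-autoscaler auto-discovery tags the cluster autoscaler looks for on an ASG. MNG adds equivalents for you; for self-managed worker groups you set them yourself. A sketch, assuming this module's list-of-maps tag format:

tags = [
  {
    key                 = "k8s.io/cluster-autoscaler/enabled"
    value               = "true"
    propagate_at_launch = false
  },
  {
    key                 = "k8s.io/cluster-autoscaler/${local.cluster_name}"
    value               = "owned"
    propagate_at_launch = false
  },
]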

@arthurio

@dpiddockcmp Is it possible to taint the nodes when using node_groups? I don't see the kubelet_extra_args equivalent in node_groups.

@dpiddockcmp
Contributor

dpiddockcmp commented Jun 26, 2020

No, there is currently very little control offered over managed node group instances by the AWS system. There is a request on their roadmap for a bit more flexibility (aws/containers-roadmap#596) and specifically for taints (aws/containers-roadmap#864).

You have to use traditional worker groups if you want to do anything more complicated than create nodes.
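For comparison, a sketch of how taints and labels look on a self-managed worker group via kubelet flags; the taint key/value and label are placeholders:

worker_groups = [
  {
    name               = "tainted-workers"
    instance_type      = "m5.large"
    kubelet_extra_args = "--register-with-taints=dedicated=example:NoSchedule --node-labels=workload=example"
  },
]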

@barryib
Member

barryib commented Jul 20, 2020

@Alxander64 I think that @dpiddockcmp answered your question. Closing this issue for now. Feel free to re-open if needed.

@barryib barryib closed this as completed Jul 20, 2020
@pranas
Contributor

pranas commented Sep 7, 2020

I think this info should be included somewhere in the docs. I too was researching the difference between node_groups and worker_groups and which one is the preferred way to go. Luckily I found this issue, but I searched the README first and expected it to be there.

@barryib
Member

barryib commented Sep 8, 2020

I think this info should be included somewhere in the docs. I too was researching the difference between node_groups and worker_groups and which one is the preferred way to go. Luckily I found this issue, but I searched the README first and expected it to be there.

@pranas Can you please open a PR with the update you suggested?

@pranas
Contributor

pranas commented Sep 8, 2020

Sure, I can write something up when I find free time.

@iamhoges

iamhoges commented Sep 9, 2020

Can we use the module with node_groups (managed nodes) together with worker_groups_launch_template or worker_groups (self-managed nodes)?

For example:

module "eks" {
  source          = "../.."
  cluster_name    = local.cluster_name
  cluster_version = "1.17"
  subnets         = module.vpc.private_subnets

  tags = {
    Environment = "test"
    GithubRepo  = "terraform-aws-eks"
    GithubOrg   = "terraform-aws-modules"
  }

  vpc_id = module.vpc.vpc_id

  # EKS managed node group
  node_groups = {
    example = {
      desired_capacity = 1
      max_capacity     = 10
      min_capacity     = 1

      instance_type = "m5.large"
      k8s_labels = {
        Environment = "test"
        GithubRepo  = "terraform-aws-eks"
        GithubOrg   = "terraform-aws-modules"
      }
      additional_tags = {
        ExtraTag = "example"
      }
    }
  }
  
  # Self-managed worker group defined via a launch template
  worker_groups_launch_template = [
    {
      name                    = "spot-1"
      override_instance_types = ["m5.large", "m5a.large", "m5d.large", "m5ad.large"]
      spot_instance_pools     = 4
      asg_max_size            = 5
      asg_desired_capacity    = 5
      kubelet_extra_args      = "--node-labels=node.kubernetes.io/lifecycle=spot"
      public_ip               = true
    },
  ]
  
}

@dpiddockcmp
Contributor

Yes, all three can be used together in a single cluster.

@github-actions

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Nov 24, 2022