Deployment never completes but cluster is active #777
Do you have wget installed? The command that gets run is: https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/variables.tf#L204 I'm not sure how the module would behave in terms of redirecting stderr, but testing with a non-existent command on my machine, it runs forever as far as I can see.
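If wget turns out to be missing on the machine running Terraform, one thing worth trying (a sketch, assuming the module version in use exposes the wait_for_cluster_cmd variable shown in the linked variables.tf) is to swap the default check for a curl-based one:

module "eks-cluster" {
  source  = "terraform-aws-modules/eks/aws"
  version = "9.0.0"

  # ... rest of the existing configuration from Main.tf ...

  # Assumed override: poll the health endpoint with curl instead of the
  # default wget-based command. Adjust to whatever your module version
  # actually accepts.
  wait_for_cluster_cmd = "until curl -k -s $ENDPOINT/healthz >/dev/null; do sleep 4; done"
}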
If wget is not available, the command will most likely fail. In that case I suggest checking the wget version, as there is a known issue with SSL compatibility, or verifying that the endpoint is actually reachable from the place where you run your Terraform code (for a private endpoint it must be the same VPC; for a public endpoint you must check the whitelist value configured under terraform-aws-eks/variables.tf lines 249 to 253 in 4c0c4c4).
Please also check #757.
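For the public-endpoint case, the whitelist mentioned above would look roughly like this (a sketch; the variable name cluster_endpoint_public_access_cidrs and the example CIDR are assumptions to adapt to your setup):

module "eks-cluster" {
  source  = "terraform-aws-modules/eks/aws"
  version = "9.0.0"

  # ... rest of the existing configuration from Main.tf ...

  cluster_endpoint_public_access = true
  # Example value only: the CIDR that Terraform runs from must be allowed here.
  cluster_endpoint_public_access_cidrs = ["203.0.113.0/24"]
}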
I think I see the problem:
@js-timbirkett That seems to have fixed it. I reverted to v8.1.0 and then went back to v9.0.0 and removed the one line. It also didn't like cluster_endpoint_public_access = false, so I had to fix that. Now everything looks like it's provisioned, but it does throw an error at the end saying the auth map already exists. I did clear out the auth map file that it drops in the root and deleted .terraform, but it still throws that error at the end.
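For anyone hitting the same "aws-auth already exists" error, one possible workaround (a sketch, assuming the module version in use exposes a manage_aws_auth variable) is to stop the module from trying to create a configmap it no longer owns, or to delete the stale configmap and re-apply:

module "eks-cluster" {
  source  = "terraform-aws-modules/eks/aws"
  version = "9.0.0"

  # ... rest of the existing configuration from Main.tf ...

  # Assumed variable: skip creation of the aws-auth configmap when it
  # already exists in the cluster and is managed elsewhere.
  manage_aws_auth = false
}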
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
This issue has been automatically closed because it has not had recent activity since being marked as stale.
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
I have issues
The same code was used to build two other clusters and they are working. There was some lag between when the initial clusters were built and this one.
I'm submitting a...
What is the current behavior?
The cluster deploy never completes with worker nodes and ASGs. The cluster state does show active, but it seems like the ASGs and worker nodes are never deployed.
If this is a bug, how to reproduce? Please include a code sample if relevant.
When running apply, the cluster is built and reaches an active state, but the node groups are never created; there are no ASGs or launch templates.
It's stuck in a loop:
module.eks-cluster.null_resource.wait_for_cluster[0]: Still creating...
module.eks-cluster.null_resource.wait_for_cluster[0]: Still creating...
module.eks-cluster.null_resource.wait_for_cluster[0]: Still creating...
module.eks-cluster.null_resource.wait_for_cluster[0]: Still creating...
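For context, the resource that keeps printing "Still creating..." is roughly a local-exec loop that polls the cluster endpoint until it responds; this is a simplified sketch, not the module's exact source. With cluster_endpoint_public_access = false, that loop can only succeed if Terraform runs from inside the VPC:

resource "null_resource" "wait_for_cluster" {
  # Simplified sketch of the wait logic: poll the cluster's /healthz
  # endpoint every few seconds until it answers.
  provisioner "local-exec" {
    command = "until wget --no-check-certificate -O - -q $ENDPOINT/healthz >/dev/null; do sleep 4; done"
    environment = {
      ENDPOINT = aws_eks_cluster.this.endpoint # reference shown for illustration only
    }
  }
}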
Main.tf
module "eks-cluster" {
source = "terraform-aws-modules/eks/aws"
version = "9.0.0"
cluster_name = "qa-eps-eks"
subnets = "${data.aws_subnet_ids.default.ids}"
vpc_id = "${var.vpc_id}"
cluster_endpoint_private_access = true
cluster_endpoint_public_access = false
cluster_security_group_id = "${data.aws_security_group.default.id}"
worker_groups = [
{
name = "qa-eks-workers"
instance_type = "m4.large"
key_name = "qa"
asg_min_size = 2
asg_desired_size = 2
asg_max_size = 5
autoscaling_enabled = true
tags = [{
propagate_at_launch = true
key = "terraform"
value = "true"
}]
}
]
tags = {
environment = "qa"
terraform = "true"
}
}
What's the expected behavior?
The expected behavior is that the cluster is built along with the ASGs and worker nodes.
Are you able to fix this problem and submit a PR? Link here if you have already.
Environment details
Any other relevant info
Module v9.0.0