
EKS node group deletion fails due to "Ec2SecurityGroupDeletionFailure: DependencyViolation - resource has a dependent object. Resource IDs: [sg-example]" #20405

Closed
harish422 opened this issue Aug 2, 2021 · 3 comments · Fixed by #26553
Labels
  • bug: Addresses a defect in current functionality.
  • service/ec2: Issues and PRs that pertain to the ec2 service.
  • service/eks: Issues and PRs that pertain to the eks service.
  • service/iam: Issues and PRs that pertain to the iam service.

@harish422

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions; they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform CLI and Terraform AWS Provider Version

Terraform v0.12.29

  • provider.aws v3.37.0
  • provider.http v2.1.0

Your version of Terraform is out of date! The latest version
is 1.0.3. You can update by downloading from https://www.terraform.io/downloads.html

Affected Resource(s)

  • aws_eks_node_group
  • aws_security_group

Error message:

Error: error waiting for EKS Node Group (HRIT:CustA-HR) deletion: Ec2SecurityGroupDeletionFailure: DependencyViolation - resource has a dependent object. Resource IDs: [sg-example]

Terraform Configuration Files

Please include all Terraform configurations required to reproduce the bug. Bug reports without a functional reproduction may be closed without investigation.

data "http" "workstation-external-ip" {
  url = "http://ipv4.icanhazip.com"
}

# Override with variable or hardcoded value if necessary
locals {
  workstation-external-cidr = "${chomp(data.http.workstation-external-ip.body)}/32"
}
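As the comment above notes, the auto-detected workstation CIDR can be overridden instead of relying on icanhazip.com. A minimal sketch of such an override, replacing the locals block above (the variable name workstation_cidr_override is hypothetical, not part of the original configuration):

# Hypothetical override: set var.workstation_cidr_override (e.g. "203.0.113.10/32")
# to skip the icanhazip.com lookup; an empty string means auto-detect.
variable "workstation_cidr_override" {
  type    = string
  default = ""
}

locals {
  workstation-external-cidr = var.workstation_cidr_override != "" ? var.workstation_cidr_override : "${chomp(data.http.workstation-external-ip.body)}/32"
}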


# Provider Configuration
#

provider "aws" {
  region  = var.region
  version = ">= 2.38.0"
}

# Using these data sources allows the configuration to be
# generic for any region.
data "aws_region" "current" {}

data "aws_availability_zones" "available" {}

# Not required: currently used in conjunction with
# icanhazip.com to determine the local workstation's external IP
# in order to open EC2 Security Group access to the Kubernetes cluster.
# See workstation-external-ip.tf for additional information.
provider "http" {}


# VPC Resources
#  * VPC
#  * Subnets
#  * Internet Gateway
#  * Route Table
#


resource "aws_route_table_association" "HR_K8s_1" {
  subnet_id      = "${var.hr_k8s_1_id}"
  route_table_id = "${var.route_table_id}"
}

resource "aws_route_table_association" "HR_K8s_2" {
  subnet_id      = "${var.hr_k8s_2_id}"
  route_table_id = "${var.route_table_id}"
}

resource "aws_route_table_association" "IT_K8s_1" {
  subnet_id      = "${var.it_k8s_1_id}"
  route_table_id = "${var.route_table_id}"
}

resource "aws_route_table_association" "IT_K8s_2" {
  subnet_id      = "${var.it_k8s_2_id}"
  route_table_id = "${var.route_table_id}"
}

resource "aws_route_table_association" "Sales_K8s_1" {
  subnet_id      = "${var.sales_k8s_1_id}"
  route_table_id = "${var.route_table_id}"
}

resource "aws_route_table_association" "Sales_K8s_2" {
  subnet_id      = "${var.sales_k8s_2_id}"
  route_table_id = "${var.route_table_id}"
}

resource "aws_route_table_association" "Marketing_K8s_1" {
  subnet_id      = "${var.marketing_k8s_1_id}"
  route_table_id = "${var.route_table_id}"
}

resource "aws_route_table_association" "Marketing_K8s_2" {
  subnet_id      = "${var.marketing_k8s_2_id}"
  route_table_id = "${var.route_table_id}"
}


# eks cluster security group configuration

#HRIT Cluster security config

resource "aws_security_group" "HRIT-cluster" {
  name        = "terraform-eks-HRIT-cluster"
  description = "Cluster communication with worker nodes"
  vpc_id      = "${var.vpc_id}"

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "terraform-eks-HRIT"
  }
}

resource "aws_security_group_rule" "HRIT-cluster-ingress-workstation-https" {
  cidr_blocks       = [local.workstation-external-cidr]
  description       = "Allow workstation to communicate with the cluster API Server"
  from_port         = 443
  protocol          = "tcp"
  security_group_id = aws_security_group.HRIT-cluster.id
  to_port           = 443
  type              = "ingress"
}

# HRIT Cluster creation

resource "aws_iam_role" "HRIT-cluster" {
  name = "terraform-eks-HRIT-cluster"

  assume_role_policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "eks.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
POLICY
}

resource "aws_iam_role_policy_attachment" "HRIT-cluster-AmazonEKSClusterPolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
  role       = aws_iam_role.HRIT-cluster.name
}

resource "aws_iam_role_policy_attachment" "HRIT-cluster-AmazonEKSServicePolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSServicePolicy"
  role       = aws_iam_role.HRIT-cluster.name
}

resource "aws_eks_cluster" "HRIT" {
  name     = "HRIT"
  role_arn = aws_iam_role.HRIT-cluster.arn
  version = var.k8s_version

  vpc_config {
    endpoint_private_access = true
    endpoint_public_access = true
    public_access_cidrs = [local.workstation-external-cidr]
    security_group_ids = [aws_security_group.HRIT-cluster.id]
    subnet_ids         = ["${var.hr_k8s_1_id}", "${var.hr_k8s_2_id}", "${var.it_k8s_1_id}", "${var.it_k8s_2_id}"]
  }

  depends_on = [
    aws_iam_role_policy_attachment.HRIT-cluster-AmazonEKSClusterPolicy,
    aws_iam_role_policy_attachment.HRIT-cluster-AmazonEKSServicePolicy,
  ]
}

## HR worker node config

resource "aws_iam_role" "HR-node" {
  name = "terraform-eks-HR-node"

  assume_role_policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
POLICY
}

resource "aws_iam_role_policy_attachment" "HR-node-AmazonEKSWorkerNodePolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
  role       = aws_iam_role.HR-node.name
}

resource "aws_iam_role_policy_attachment" "HR-node-AmazonEKS_CNI_Policy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
  role       = aws_iam_role.HR-node.name
}

resource "aws_iam_role_policy_attachment" "HR-node-AmazonEC2ContainerRegistryReadOnly" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
  role       = aws_iam_role.HR-node.name
}

resource "aws_eks_node_group" "CustA-HR" {
  disk_size       = var.custa_hr_ng_disk_size
  instance_types  = ["c5.4xlarge"]
  ami_type        = "AL2_x86_64"
  cluster_name    = aws_eks_cluster.HRIT.name
  node_group_name = "CustA-HR"
  node_role_arn   = aws_iam_role.HR-node.arn
  subnet_ids      = ["${var.hr_k8s_1_id}", "${var.hr_k8s_2_id}"]
  labels = {
    "Dept" = "HR"
  }
  remote_access {
    ec2_ssh_key = "XYZ"
  }
  scaling_config {
    desired_size = var.custa_hr_ng_desired_size
    max_size     = var.custa_hr_ng_max_size
    min_size     = var.custa_hr_ng_min_size
  }

  depends_on = [
    aws_iam_role_policy_attachment.HR-node-AmazonEKSWorkerNodePolicy,
    aws_iam_role_policy_attachment.HR-node-AmazonEKS_CNI_Policy,
    aws_iam_role_policy_attachment.HR-node-AmazonEC2ContainerRegistryReadOnly,
    aws_security_group.HRIT-cluster,
  ]
}

## IT worker node config

resource "aws_iam_role" "IT-node" {
  name = "terraform-eks-IT-node"

  assume_role_policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
POLICY
}

resource "aws_iam_role_policy_attachment" "IT-node-AmazonEKSWorkerNodePolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
  role       = aws_iam_role.IT-node.name
}

resource "aws_iam_role_policy_attachment" "IT-node-AmazonEKS_CNI_Policy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
  role       = aws_iam_role.IT-node.name
}

resource "aws_iam_role_policy_attachment" "IT-node-AmazonEC2ContainerRegistryReadOnly" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
  role       = aws_iam_role.IT-node.name
}

resource "aws_eks_node_group" "CustA-IT" {
  disk_size       = var.custa_it_ng_disk_size
  instance_types  = ["c5.4xlarge"]
  ami_type        = "AL2_x86_64"
  cluster_name    = aws_eks_cluster.HRIT.name
  node_group_name = "CustA-IT"
  node_role_arn   = aws_iam_role.IT-node.arn
  subnet_ids      = ["${var.it_k8s_1_id}", "${var.it_k8s_2_id}"]
  labels = {
    "Dept" = "IT"
  }
  remote_access {
    ec2_ssh_key = "XYZ"
  }
  scaling_config {
    desired_size = var.custa_it_ng_desired_size
    max_size     = var.custa_it_ng_max_size
    min_size     = var.custa_it_ng_min_size
  }

  depends_on = [
    aws_iam_role_policy_attachment.IT-node-AmazonEKSWorkerNodePolicy,
    aws_iam_role_policy_attachment.IT-node-AmazonEKS_CNI_Policy,
    aws_iam_role_policy_attachment.IT-node-AmazonEC2ContainerRegistryReadOnly,
    aws_security_group.HRIT-cluster,
  ]
}

#Sales security group configuration

resource "aws_security_group" "Sales-cluster" {
  name        = "terraform-eks-Sales-cluster"
  description = "Cluster communication with worker nodes"
  vpc_id      = "${var.vpc_id}"

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "terraform-eks-Sales"
  }
}

resource "aws_security_group_rule" "Sales-cluster-ingress-workstation-https" {
  cidr_blocks       = [local.workstation-external-cidr]
  description       = "Allow workstation to communicate with the cluster API Server"
  from_port         = 443
  protocol          = "tcp"
  security_group_id = aws_security_group.Sales-cluster.id
  to_port           = 443
  type              = "ingress"
}


# Sales Cluster

resource "aws_iam_role" "Sales-cluster" {
  name = "terraform-eks-Sales-cluster"

  assume_role_policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "eks.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
POLICY
}

resource "aws_iam_role_policy_attachment" "Sales-cluster-AmazonEKSClusterPolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
  role       = aws_iam_role.Sales-cluster.name
}

resource "aws_iam_role_policy_attachment" "Sales-cluster-AmazonEKSServicePolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSServicePolicy"
  role       = aws_iam_role.Sales-cluster.name
}


resource "aws_eks_cluster" "Sales" {
  name     = "Sales"
  role_arn = aws_iam_role.Sales-cluster.arn
  version = var.k8s_version

  vpc_config {
    endpoint_private_access = true
    endpoint_public_access = true
    public_access_cidrs = [local.workstation-external-cidr]
    security_group_ids = [aws_security_group.Sales-cluster.id]
    subnet_ids         = ["${var.sales_k8s_1_id}", "${var.sales_k8s_2_id}"]
  }

  depends_on = [
    aws_iam_role_policy_attachment.Sales-cluster-AmazonEKSClusterPolicy,
    aws_iam_role_policy_attachment.Sales-cluster-AmazonEKSServicePolicy,
  ]
}

## Sales worker node config

resource "aws_iam_role" "Sales-node" {
  name = "terraform-eks-Sales-node"

  assume_role_policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
POLICY
}

resource "aws_iam_role_policy_attachment" "Sales-node-AmazonEKSWorkerNodePolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
  role       = aws_iam_role.Sales-node.name
}

resource "aws_iam_role_policy_attachment" "Sales-node-AmazonEKS_CNI_Policy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
  role       = aws_iam_role.Sales-node.name
}

resource "aws_iam_role_policy_attachment" "Sales-node-AmazonEC2ContainerRegistryReadOnly" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
  role       = aws_iam_role.Sales-node.name
}

resource "aws_eks_node_group" "CustA-Sales" {
  disk_size       = var.custa_sales_ng_disk_size
  instance_types  = ["t3.2xlarge"]
  ami_type        = "AL2_x86_64"
  cluster_name    = aws_eks_cluster.Sales.name
  node_group_name = "CustA-Sales"
  node_role_arn   = aws_iam_role.Sales-node.arn
  subnet_ids      = ["${var.sales_k8s_1_id}", "${var.sales_k8s_2_id}"]
  labels = {
    "Dept" = "Sales"
  }
  remote_access {
    ec2_ssh_key = "XYZ"
  }
  scaling_config {
    desired_size = var.custa_sales_ng_desired_size
    max_size     = var.custa_sales_ng_max_size
    min_size     = var.custa_sales_ng_min_size
  }

  depends_on = [
    aws_iam_role_policy_attachment.Sales-node-AmazonEKSWorkerNodePolicy,
    aws_iam_role_policy_attachment.Sales-node-AmazonEKS_CNI_Policy,
    aws_iam_role_policy_attachment.Sales-node-AmazonEC2ContainerRegistryReadOnly,
    aws_security_group.Sales-cluster,
  ]
}



#Marketing security group configuration

resource "aws_security_group" "Marketing-cluster" {
  name        = "terraform-eks-Marketing-cluster"
  description = "Cluster communication with worker nodes"
  vpc_id      = "${var.vpc_id}"

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "terraform-eks-Marketing"
  }
}

resource "aws_security_group_rule" "Marketing-cluster-ingress-workstation-https" {
  cidr_blocks       = [local.workstation-external-cidr]
  description       = "Allow workstation to communicate with the cluster API Server"
  from_port         = 443
  protocol          = "tcp"
  security_group_id = aws_security_group.Marketing-cluster.id
  to_port           = 443
  type              = "ingress"
}

# Marketing cluster


resource "aws_iam_role" "Marketing-cluster" {
  name = "terraform-eks-Marketing-cluster"

  assume_role_policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "eks.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
POLICY
}

resource "aws_iam_role_policy_attachment" "Marketing-cluster-AmazonEKSClusterPolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
  role       = aws_iam_role.Marketing-cluster.name
}

resource "aws_iam_role_policy_attachment" "Marketing-cluster-AmazonEKSServicePolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSServicePolicy"
  role       = aws_iam_role.Marketing-cluster.name
}

resource "aws_eks_cluster" "Marketing" {
  name     = "Marketing"
  role_arn = aws_iam_role.Marketing-cluster.arn
  version = var.k8s_version

  vpc_config {
    endpoint_private_access = true
    endpoint_public_access = true
    public_access_cidrs = [local.workstation-external-cidr]
    security_group_ids = [aws_security_group.Marketing-cluster.id]
    subnet_ids         = ["${var.marketing_k8s_1_id}", "${var.marketing_k8s_2_id}"]
  }

  depends_on = [
    aws_iam_role_policy_attachment.Marketing-cluster-AmazonEKSClusterPolicy,
    aws_iam_role_policy_attachment.Marketing-cluster-AmazonEKSServicePolicy,
  ]
}

## Marketing worker node config

resource "aws_iam_role" "Marketing-node" {
  name = "terraform-eks-Marketing-node"

  assume_role_policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
POLICY
}

resource "aws_iam_role_policy_attachment" "Marketing-node-AmazonEKSWorkerNodePolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
  role       = aws_iam_role.Marketing-node.name
}

resource "aws_iam_role_policy_attachment" "Marketing-node-AmazonEKS_CNI_Policy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
  role       = aws_iam_role.Marketing-node.name
}

resource "aws_iam_role_policy_attachment" "Marketing-node-AmazonEC2ContainerRegistryReadOnly" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
  role       = aws_iam_role.Marketing-node.name
}

resource "aws_eks_node_group" "CustA-Marketing" {
  disk_size       = var.custa_marketing_ng_disk_size
  instance_types  = ["c5.4xlarge"]
  ami_type        = "AL2_x86_64"
  cluster_name    = aws_eks_cluster.Marketing.name
  node_group_name = "CustA-Marketing"
  node_role_arn   = aws_iam_role.Marketing-node.arn
  subnet_ids      = ["${var.marketing_k8s_1_id}", "${var.marketing_k8s_2_id}"]
  labels = {
    "Dept" = "Marketing"
  }
  remote_access {
    ec2_ssh_key = "XYZ"
  }
  scaling_config {
    desired_size = var.custa_marketing_ng_desired_size
    max_size     = var.custa_marketing_ng_max_size
    min_size     = var.custa_marketing_ng_min_size
  }

  depends_on = [
    aws_iam_role_policy_attachment.Marketing-node-AmazonEKSWorkerNodePolicy,
    aws_iam_role_policy_attachment.Marketing-node-AmazonEKS_CNI_Policy,
    aws_iam_role_policy_attachment.Marketing-node-AmazonEC2ContainerRegistryReadOnly,
    aws_security_group.Marketing-cluster,
  ]
}

Debug Output

Panic Output

Expected Behavior

Security groups should be deleted gracefully (even though ENIs are still linked to the SG), followed by node group deletion.

Actual Behavior

  1. Security group deletion fails, which in turn causes the node group deletion to fail because the security group still exists.
  2. When I tried to delete the security group from the AWS console, I noticed it could not be deleted because some ENIs were still attached to it.
  3. Manually deleting the ENIs and re-running "terraform destroy" helps.
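The manual cleanup in item 3 can be done from the AWS CLI instead of the console. A sketch, with sg-example standing in for the real security group ID and the ENI ID purely illustrative:

```shell
# List the network interfaces that still reference the security group.
aws ec2 describe-network-interfaces \
  --filters Name=group-id,Values=sg-example \
  --query 'NetworkInterfaces[].NetworkInterfaceId' \
  --output text

# For each ENI that is no longer in use, delete it (ID below is hypothetical),
# then retry the destroy.
aws ec2 delete-network-interface --network-interface-id eni-0123456789abcdef0
terraform destroy
```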

Steps to Reproduce

terraform apply
terraform destroy

Important Factoids

References

  • #0000
@github-actions bot added labels needs-triage (Waiting for first response or review from a maintainer.), service/ec2, service/eks, service/iam on Aug 2, 2021
@harish422 (author)

Below is the destroy console output:

aws_security_group_rule.HRIT-cluster-ingress-workstation-https: Destroying... [id=sgrule-1571703115]
aws_route_table_association.HR_K8s_2: Destroying... [id=rtbassoc-03f94311c7ba33161]
aws_security_group_rule.Marketing-cluster-ingress-workstation-https: Destroying... [id=sgrule-51171598]
aws_security_group_rule.Sales-cluster-ingress-workstation-https: Destroying... [id=sgrule-497293064]
aws_route_table_association.HR_K8s_1: Destroying... [id=rtbassoc-09e0dc9234365d0f2]
aws_route_table_association.IT_K8s_1: Destroying... [id=rtbassoc-097e9aa1644f674bf]
aws_route_table_association.Marketing_K8s_1: Destroying... [id=rtbassoc-0e1e95ac378434207]
aws_eks_node_group.CustA-HR: Destroying... [id=HRIT:CustA-HR]
aws_route_table_association.Sales_K8s_1: Destroying... [id=rtbassoc-035d13faffdba06a1]
aws_eks_node_group.CustA-Marketing: Destroying... [id=Marketing:CustA-Marketing]
aws_route_table_association.HR_K8s_1: Destruction complete after 6s
aws_route_table_association.Sales_K8s_2: Destroying... [id=rtbassoc-0e42b4df7a4336f90]
aws_route_table_association.HR_K8s_2: Destruction complete after 6s
aws_route_table_association.IT_K8s_1: Destruction complete after 6s
aws_route_table_association.IT_K8s_2: Destroying... [id=rtbassoc-009417f8c59015db1]
aws_eks_node_group.CustA-IT: Destroying... [id=HRIT:CustA-IT]
aws_route_table_association.Marketing_K8s_1: Destruction complete after 6s
aws_route_table_association.Marketing_K8s_2: Destroying... [id=rtbassoc-04c3457f8677c663f]
aws_route_table_association.Sales_K8s_1: Destruction complete after 6s
aws_eks_node_group.CustA-Sales: Destroying... [id=Sales:CustA-Sales]
aws_security_group_rule.Sales-cluster-ingress-workstation-https: Destruction complete after 7s
aws_security_group_rule.Marketing-cluster-ingress-workstation-https: Destruction complete after 7s
aws_security_group_rule.HRIT-cluster-ingress-workstation-https: Destruction complete after 7s
aws_route_table_association.Marketing_K8s_2: Destruction complete after 1s
aws_route_table_association.IT_K8s_2: Destruction complete after 1s
aws_route_table_association.Sales_K8s_2: Destruction complete after 1s
aws_eks_node_group.CustA-HR: Still destroying... [id=HRIT:CustA-HR, 10s elapsed]
aws_eks_node_group.CustA-Marketing: Still destroying... [id=Marketing:CustA-Marketing, 10s elapsed]
aws_eks_node_group.CustA-IT: Still destroying... [id=HRIT:CustA-IT, 10s elapsed]
aws_eks_node_group.CustA-Sales: Still destroying... [id=Sales:CustA-Sales, 10s elapsed]
aws_eks_node_group.CustA-HR: Still destroying... [id=HRIT:CustA-HR, 20s elapsed]
aws_eks_node_group.CustA-Marketing: Still destroying... [id=Marketing:CustA-Marketing, 20s elapsed]
aws_eks_node_group.CustA-IT: Still destroying... [id=HRIT:CustA-IT, 20s elapsed]
aws_eks_node_group.CustA-Sales: Still destroying... [id=Sales:CustA-Sales, 20s elapsed]
aws_eks_node_group.CustA-HR: Still destroying... [id=HRIT:CustA-HR, 30s elapsed]
aws_eks_node_group.CustA-Marketing: Still destroying... [id=Marketing:CustA-Marketing, 30s elapsed]
aws_eks_node_group.CustA-IT: Still destroying... [id=HRIT:CustA-IT, 30s elapsed]
aws_eks_node_group.CustA-Sales: Still destroying... [id=Sales:CustA-Sales, 30s elapsed]
aws_eks_node_group.CustA-HR: Still destroying... [id=HRIT:CustA-HR, 40s elapsed]
aws_eks_node_group.CustA-Marketing: Still destroying... [id=Marketing:CustA-Marketing, 40s elapsed]
aws_eks_node_group.CustA-IT: Still destroying... [id=HRIT:CustA-IT, 40s elapsed]
aws_eks_node_group.CustA-Sales: Still destroying... [id=Sales:CustA-Sales, 40s elapsed]
aws_eks_node_group.CustA-HR: Still destroying... [id=HRIT:CustA-HR, 50s elapsed]
aws_eks_node_group.CustA-Marketing: Still destroying... [id=Marketing:CustA-Marketing, 50s elapsed]
aws_eks_node_group.CustA-IT: Still destroying... [id=HRIT:CustA-IT, 50s elapsed]
aws_eks_node_group.CustA-Sales: Still destroying... [id=Sales:CustA-Sales, 50s elapsed]
aws_eks_node_group.CustA-HR: Still destroying... [id=HRIT:CustA-HR, 1m0s elapsed]
aws_eks_node_group.CustA-Marketing: Still destroying... [id=Marketing:CustA-Marketing, 1m0s elapsed]
aws_eks_node_group.CustA-IT: Still destroying... [id=HRIT:CustA-IT, 1m0s elapsed]
aws_eks_node_group.CustA-Sales: Still destroying... [id=Sales:CustA-Sales, 1m0s elapsed]
aws_eks_node_group.CustA-HR: Still destroying... [id=HRIT:CustA-HR, 1m10s elapsed]
aws_eks_node_group.CustA-Marketing: Still destroying... [id=Marketing:CustA-Marketing, 1m10s elapsed]
aws_eks_node_group.CustA-IT: Still destroying... [id=HRIT:CustA-IT, 1m10s elapsed]
aws_eks_node_group.CustA-Sales: Still destroying... [id=Sales:CustA-Sales, 1m10s elapsed]
aws_eks_node_group.CustA-HR: Still destroying... [id=HRIT:CustA-HR, 1m20s elapsed]
aws_eks_node_group.CustA-Marketing: Still destroying... [id=Marketing:CustA-Marketing, 1m20s elapsed]
aws_eks_node_group.CustA-IT: Still destroying... [id=HRIT:CustA-IT, 1m20s elapsed]
aws_eks_node_group.CustA-Sales: Still destroying... [id=Sales:CustA-Sales, 1m20s elapsed]
aws_eks_node_group.CustA-HR: Still destroying... [id=HRIT:CustA-HR, 1m30s elapsed]
aws_eks_node_group.CustA-Marketing: Still destroying... [id=Marketing:CustA-Marketing, 1m30s elapsed]
aws_eks_node_group.CustA-IT: Still destroying... [id=HRIT:CustA-IT, 1m30s elapsed]
aws_eks_node_group.CustA-Sales: Still destroying... [id=Sales:CustA-Sales, 1m30s elapsed]
aws_eks_node_group.CustA-HR: Still destroying... [id=HRIT:CustA-HR, 1m40s elapsed]
aws_eks_node_group.CustA-Marketing: Still destroying... [id=Marketing:CustA-Marketing, 1m40s elapsed]
aws_eks_node_group.CustA-IT: Still destroying... [id=HRIT:CustA-IT, 1m40s elapsed]
aws_eks_node_group.CustA-Sales: Still destroying... [id=Sales:CustA-Sales, 1m40s elapsed]
aws_eks_node_group.CustA-HR: Still destroying... [id=HRIT:CustA-HR, 1m50s elapsed]
aws_eks_node_group.CustA-Marketing: Still destroying... [id=Marketing:CustA-Marketing, 1m50s elapsed]
aws_eks_node_group.CustA-IT: Still destroying... [id=HRIT:CustA-IT, 1m50s elapsed]
aws_eks_node_group.CustA-Sales: Still destroying... [id=Sales:CustA-Sales, 1m50s elapsed]
aws_eks_node_group.CustA-HR: Still destroying... [id=HRIT:CustA-HR, 2m0s elapsed]
aws_eks_node_group.CustA-Marketing: Still destroying... [id=Marketing:CustA-Marketing, 2m0s elapsed]
aws_eks_node_group.CustA-IT: Still destroying... [id=HRIT:CustA-IT, 2m0s elapsed]
aws_eks_node_group.CustA-Sales: Still destroying... [id=Sales:CustA-Sales, 2m0s elapsed]
aws_eks_node_group.CustA-HR: Still destroying... [id=HRIT:CustA-HR, 2m10s elapsed]
aws_eks_node_group.CustA-Marketing: Still destroying... [id=Marketing:CustA-Marketing, 2m10s elapsed]
aws_eks_node_group.CustA-IT: Still destroying... [id=HRIT:CustA-IT, 2m10s elapsed]
aws_eks_node_group.CustA-Sales: Still destroying... [id=Sales:CustA-Sales, 2m10s elapsed]
aws_eks_node_group.CustA-HR: Still destroying... [id=HRIT:CustA-HR, 2m20s elapsed]
aws_eks_node_group.CustA-Marketing: Still destroying... [id=Marketing:CustA-Marketing, 2m20s elapsed]
aws_eks_node_group.CustA-IT: Still destroying... [id=HRIT:CustA-IT, 2m20s elapsed]
aws_eks_node_group.CustA-Sales: Still destroying... [id=Sales:CustA-Sales, 2m20s elapsed]
aws_eks_node_group.CustA-HR: Still destroying... [id=HRIT:CustA-HR, 2m30s elapsed]
aws_eks_node_group.CustA-Marketing: Still destroying... [id=Marketing:CustA-Marketing, 2m30s elapsed]
aws_eks_node_group.CustA-IT: Still destroying... [id=HRIT:CustA-IT, 2m30s elapsed]
aws_eks_node_group.CustA-Sales: Still destroying... [id=Sales:CustA-Sales, 2m30s elapsed]
aws_eks_node_group.CustA-HR: Still destroying... [id=HRIT:CustA-HR, 2m40s elapsed]
aws_eks_node_group.CustA-Marketing: Still destroying... [id=Marketing:CustA-Marketing, 2m40s elapsed]
aws_eks_node_group.CustA-IT: Still destroying... [id=HRIT:CustA-IT, 2m40s elapsed]
aws_eks_node_group.CustA-Sales: Still destroying... [id=Sales:CustA-Sales, 2m40s elapsed]
aws_eks_node_group.CustA-HR: Still destroying... [id=HRIT:CustA-HR, 2m50s elapsed]
aws_eks_node_group.CustA-Marketing: Still destroying... [id=Marketing:CustA-Marketing, 2m50s elapsed]
aws_eks_node_group.CustA-IT: Still destroying... [id=HRIT:CustA-IT, 2m50s elapsed]
aws_eks_node_group.CustA-Sales: Still destroying... [id=Sales:CustA-Sales, 2m50s elapsed]
aws_eks_node_group.CustA-HR: Still destroying... [id=HRIT:CustA-HR, 3m0s elapsed]
aws_eks_node_group.CustA-Marketing: Still destroying... [id=Marketing:CustA-Marketing, 3m0s elapsed]
aws_eks_node_group.CustA-IT: Still destroying... [id=HRIT:CustA-IT, 3m0s elapsed]
aws_eks_node_group.CustA-Sales: Still destroying... [id=Sales:CustA-Sales, 3m0s elapsed]
aws_eks_node_group.CustA-HR: Still destroying... [id=HRIT:CustA-HR, 3m10s elapsed]
aws_eks_node_group.CustA-Marketing: Still destroying... [id=Marketing:CustA-Marketing, 3m10s elapsed]
aws_eks_node_group.CustA-IT: Still destroying... [id=HRIT:CustA-IT, 3m10s elapsed]
aws_eks_node_group.CustA-Sales: Still destroying... [id=Sales:CustA-Sales, 3m10s elapsed]
aws_eks_node_group.CustA-Marketing: Destruction complete after 3m19s
aws_iam_role_policy_attachment.Marketing-node-AmazonEC2ContainerRegistryReadOnly: Destroying... [id=terraform-eks-Marketing-node-2021073016173864380000000b]
aws_iam_role_policy_attachment.Marketing-node-AmazonEKSWorkerNodePolicy: Destroying... [id=terraform-eks-Marketing-node-2021073016173864660000000c]
aws_eks_cluster.Marketing: Destroying... [id=Marketing]
aws_iam_role_policy_attachment.Marketing-node-AmazonEKS_CNI_Policy: Destroying... [id=terraform-eks-Marketing-node-2021073016173861290000000a]
aws_iam_role_policy_attachment.Marketing-node-AmazonEKS_CNI_Policy: Destruction complete after 1s
aws_iam_role_policy_attachment.Marketing-node-AmazonEKSWorkerNodePolicy: Destruction complete after 1s
aws_iam_role_policy_attachment.Marketing-node-AmazonEC2ContainerRegistryReadOnly: Destruction complete after 1s
aws_iam_role.Marketing-node: Destroying... [id=terraform-eks-Marketing-node]
aws_eks_node_group.CustA-HR: Still destroying... [id=HRIT:CustA-HR, 3m20s elapsed]
aws_iam_role.Marketing-node: Destruction complete after 2s
aws_eks_node_group.CustA-IT: Still destroying... [id=HRIT:CustA-IT, 3m20s elapsed]
aws_eks_node_group.CustA-Sales: Still destroying... [id=Sales:CustA-Sales, 3m20s elapsed]
aws_eks_cluster.Marketing: Still destroying... [id=Marketing, 10s elapsed]
aws_eks_node_group.CustA-IT: Destruction complete after 3m24s
aws_iam_role_policy_attachment.IT-node-AmazonEC2ContainerRegistryReadOnly: Destroying... [id=terraform-eks-IT-node-2021073016174542760000000d]
aws_iam_role_policy_attachment.IT-node-AmazonEKS_CNI_Policy: Destroying... [id=terraform-eks-IT-node-2021073016174543030000000e]
aws_iam_role_policy_attachment.IT-node-AmazonEKSWorkerNodePolicy: Destroying... [id=terraform-eks-IT-node-2021073016175061740000000f]
aws_eks_node_group.CustA-HR: Still destroying... [id=HRIT:CustA-HR, 3m30s elapsed]
aws_iam_role_policy_attachment.IT-node-AmazonEKS_CNI_Policy: Destruction complete after 1s
aws_iam_role_policy_attachment.IT-node-AmazonEC2ContainerRegistryReadOnly: Destruction complete after 1s
aws_iam_role_policy_attachment.IT-node-AmazonEKSWorkerNodePolicy: Destruction complete after 1s
aws_iam_role.IT-node: Destroying... [id=terraform-eks-IT-node]
aws_iam_role.IT-node: Destruction complete after 2s
aws_eks_node_group.CustA-Sales: Still destroying... [id=Sales:CustA-Sales, 3m30s elapsed]
aws_eks_cluster.Marketing: Still destroying... [id=Marketing, 20s elapsed]
aws_eks_node_group.CustA-HR: Still destroying... [id=HRIT:CustA-HR, 3m40s elapsed]
aws_eks_node_group.CustA-Sales: Still destroying... [id=Sales:CustA-Sales, 3m40s elapsed]
aws_eks_cluster.Marketing: Still destroying... [id=Marketing, 30s elapsed]
aws_eks_node_group.CustA-HR: Still destroying... [id=HRIT:CustA-HR, 3m50s elapsed]
aws_eks_node_group.CustA-Sales: Still destroying... [id=Sales:CustA-Sales, 3m50s elapsed]
aws_eks_cluster.Marketing: Still destroying... [id=Marketing, 40s elapsed]
aws_eks_node_group.CustA-Sales: Still destroying... [id=Sales:CustA-Sales, 4m0s elapsed]
aws_eks_cluster.Marketing: Still destroying... [id=Marketing, 50s elapsed]
aws_eks_node_group.CustA-Sales: Still destroying... [id=Sales:CustA-Sales, 4m10s elapsed]
aws_eks_cluster.Marketing: Still destroying... [id=Marketing, 1m0s elapsed]
aws_eks_node_group.CustA-Sales: Still destroying... [id=Sales:CustA-Sales, 4m20s elapsed]
aws_eks_cluster.Marketing: Still destroying... [id=Marketing, 1m10s elapsed]
aws_eks_node_group.CustA-Sales: Destruction complete after 4m30s
aws_iam_role_policy_attachment.Sales-node-AmazonEKS_CNI_Policy: Destroying... [id=terraform-eks-Sales-node-20210730161737943500000006]
aws_iam_role_policy_attachment.Sales-node-AmazonEKSWorkerNodePolicy: Destroying... [id=terraform-eks-Sales-node-20210730161737934000000005]
aws_iam_role_policy_attachment.Sales-node-AmazonEC2ContainerRegistryReadOnly: Destroying... [id=terraform-eks-Sales-node-20210730161737944100000008]
aws_eks_cluster.Sales: Destroying... [id=Sales]
aws_iam_role_policy_attachment.Sales-node-AmazonEKS_CNI_Policy: Destruction complete after 1s
aws_iam_role_policy_attachment.Sales-node-AmazonEC2ContainerRegistryReadOnly: Destruction complete after 1s
aws_iam_role_policy_attachment.Sales-node-AmazonEKSWorkerNodePolicy: Destruction complete after 2s
aws_iam_role.Sales-node: Destroying... [id=terraform-eks-Sales-node]
aws_eks_cluster.Marketing: Still destroying... [id=Marketing, 1m20s elapsed]
aws_iam_role.Sales-node: Destruction complete after 2s
aws_eks_cluster.Sales: Still destroying... [id=Sales, 10s elapsed]
aws_eks_cluster.Marketing: Destruction complete after 1m28s
aws_iam_role_policy_attachment.Marketing-cluster-AmazonEKSServicePolicy: Destroying... [id=terraform-eks-Marketing-cluster-20210730161732278700000002]
aws_iam_role_policy_attachment.Marketing-cluster-AmazonEKSClusterPolicy: Destroying... [id=terraform-eks-Marketing-cluster-20210730161732274700000001]
aws_security_group.Marketing-cluster: Destroying... [id=sg-0b666285a10238f62]
aws_iam_role_policy_attachment.Marketing-cluster-AmazonEKSServicePolicy: Destruction complete after 1s
aws_iam_role_policy_attachment.Marketing-cluster-AmazonEKSClusterPolicy: Destruction complete after 1s
aws_iam_role.Marketing-cluster: Destroying... [id=terraform-eks-Marketing-cluster]
aws_security_group.Marketing-cluster: Destruction complete after 2s
aws_iam_role.Marketing-cluster: Destruction complete after 3s
aws_eks_cluster.Sales: Still destroying... [id=Sales, 20s elapsed]
aws_eks_cluster.Sales: Still destroying... [id=Sales, 30s elapsed]
aws_eks_cluster.Sales: Still destroying... [id=Sales, 40s elapsed]
aws_eks_cluster.Sales: Still destroying... [id=Sales, 50s elapsed]
aws_eks_cluster.Sales: Still destroying... [id=Sales, 1m0s elapsed]
aws_eks_cluster.Sales: Still destroying... [id=Sales, 1m10s elapsed]
aws_eks_cluster.Sales: Still destroying... [id=Sales, 1m20s elapsed]
aws_eks_cluster.Sales: Still destroying... [id=Sales, 1m30s elapsed]
aws_eks_cluster.Sales: Still destroying... [id=Sales, 1m40s elapsed]
aws_eks_cluster.Sales: Still destroying... [id=Sales, 1m50s elapsed]
aws_eks_cluster.Sales: Still destroying... [id=Sales, 2m0s elapsed]
aws_eks_cluster.Sales: Still destroying... [id=Sales, 2m10s elapsed]
aws_eks_cluster.Sales: Still destroying... [id=Sales, 2m20s elapsed]
aws_eks_cluster.Sales: Destruction complete after 2m22s
aws_iam_role_policy_attachment.Sales-cluster-AmazonEKSServicePolicy: Destroying... [id=terraform-eks-Sales-cluster-20210730161732303600000004]
aws_iam_role_policy_attachment.Sales-cluster-AmazonEKSClusterPolicy: Destroying... [id=terraform-eks-Sales-cluster-20210730161732301500000003]
aws_security_group.Sales-cluster: Destroying... [id=sg-086007437e95e69da]
aws_iam_role_policy_attachment.Sales-cluster-AmazonEKSServicePolicy: Destruction complete after 0s
aws_iam_role_policy_attachment.Sales-cluster-AmazonEKSClusterPolicy: Destruction complete after 0s
aws_iam_role.Sales-cluster: Destroying... [id=terraform-eks-Sales-cluster]
aws_security_group.Sales-cluster: Destruction complete after 1s
aws_iam_role.Sales-cluster: Destruction complete after 3s

Error: error waiting for EKS Node Group (HRIT:CustA-HR) deletion: Ec2SecurityGroupDeletionFailure: DependencyViolation - resource has a dependent object. Resource IDs: [sg-0f2eef7aff2bb7765]
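This error surfaces when the security group Terraform is deleting still has a dependent object (typically a lingering elastic network interface from the node group). A hedged sketch of the durable remedy, based on the maintainers' release note later in this thread (the fix shipped in provider v4.29.0); the version constraint is the only assumption here:

```hcl
# Sketch only: pin the AWS provider to a release that contains the fix
# for this Ec2SecurityGroupDeletionFailure / DependencyViolation error.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 4.29.0"
    }
  }
}
```

On older provider versions, the dependent objects can be located manually with the AWS CLI, e.g. `aws ec2 describe-network-interfaces --filters "Name=group-id,Values=<sg-id>"` (substitute the security group ID from the error); once the lingering ENIs are removed, `terraform destroy` can be re-run.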

breathingdust added the bug label and removed needs-triage on Aug 27, 2021
github-actions bot added this to the v4.29.0 milestone on Aug 31, 2022
github-actions bot commented Sep 2, 2022

This functionality has been released in v4.29.0 of the Terraform AWS Provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading.

For further feature requests or bug reports with this functionality, please create a new GitHub issue following the template. Thank you!

github-actions bot commented Oct 3, 2022

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

github-actions bot locked as resolved and limited conversation to collaborators on Oct 3, 2022