
bug: vpc-cni error when using this module to install ebs-csi-driver #4

Open
thetoolsmith opened this issue Jun 21, 2023 · 1 comment
Labels
bug Something isn't working

Comments

@thetoolsmith

Summary

I have tried to use this module in Terraform and get the following error on freshly provisioned EKS clusters.

│ Error: unexpected EKS Add-On (identity-tfhello-dev-X-use1:vpc-cni) state returned during creation: unexpected state 'CREATE_FAILED', wanted target 'ACTIVE'. last error: 1 error occurred:
│ 	* : ConfigurationConflict: Conflicts found when trying to apply. Will not continue due to resolve conflicts mode. Conflicts:
│ ServiceAccount aws-node - .metadata.labels.app.kubernetes.io/version
│ ClusterRole.rbac.authorization.k8s.io aws-node - .rules
│ ClusterRole.rbac.authorization.k8s.io aws-node - .metadata.labels.app.kubernetes.io/version
│ ClusterRoleBinding.rbac.authorization.k8s.io aws-node - .metadata.labels.app.kubernetes.io/version
│ DaemonSet.apps aws-node - .metadata.labels.app.kubernetes.io/version
│ DaemonSet.apps aws-node - .spec.template.spec.containers[name="aws-node"].image
│ DaemonSet.apps aws-node - .spec.template.spec.initContainers[name="aws-vpc-cni-init"].image
│ 
│ 
│ [WARNING] Running terraform apply again will remove the kubernetes add-on and attempt to create it again effectively purging previous add-on configuration
│ 
│   with module.eks_us-east-1.module.eks["X"].aws_eks_addon.this["vpc-cni"],
│   on .terraform/modules/eks_us-east-1.eks/main.tf line 382, in resource "aws_eks_addon" "this":
│  382: resource "aws_eks_addon" "this" {
│ 
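For context, this class of ConfigurationConflict on an aws_eks_addon resource is commonly worked around by telling EKS to overwrite the fields already managed on the cluster (for example, by a self-managed aws-node DaemonSet). A hedged sketch, assuming AWS provider v4.x as used here; the resource and cluster name below are illustrative, not taken from this issue:

```hcl
# Illustrative only: the default conflict mode (NONE) fails with
# ConfigurationConflict when the addon's objects already exist on the
# cluster with different field managers. OVERWRITE lets EKS take them over.
resource "aws_eks_addon" "vpc_cni" {
  cluster_name      = "my-cluster" # placeholder
  addon_name        = "vpc-cni"
  resolve_conflicts = "OVERWRITE"
}
```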

My module configuration looks like this:

module "eks-ebs-csi-driver" {
  for_each = module.eks

  source  = "lablabs/eks-ebs-csi-driver/aws"
  version = "0.1.0"

  cluster_identity_oidc_issuer     = each.value.cluster_oidc_issuer_url
  cluster_identity_oidc_issuer_arn = each.value.oidc_provider_arn
  helm_release_name                = join("-", [lower(each.value.cluster_name), "ebs-csi"])
  service_account_name             = join("-", [lower(each.value.cluster_name), "ebs-csi"])
  irsa_role_name_prefix            = each.value.cluster_name
  irsa_tags                        = var.tags
}

Issue Type

Bug Report

Terraform Version

Terraform v1.3.0
on darwin_arm64
+ provider registry.terraform.io/hashicorp/aws v4.67.0
+ provider registry.terraform.io/hashicorp/cloudinit v2.3.2
+ provider registry.terraform.io/hashicorp/helm v2.10.1
+ provider registry.terraform.io/hashicorp/kubernetes v2.21.1
+ provider registry.terraform.io/hashicorp/random v3.5.1
+ provider registry.terraform.io/hashicorp/time v0.9.1
+ provider registry.terraform.io/hashicorp/tls v4.0.4

Steps to Reproduce

module "eks-ebs-csi-driver" {
  for_each = module.eks

  source  = "lablabs/eks-ebs-csi-driver/aws"
  version = "0.1.0"

  cluster_identity_oidc_issuer     = each.value.cluster_oidc_issuer_url
  cluster_identity_oidc_issuer_arn = each.value.oidc_provider_arn
  helm_release_name                = join("-", [lower(each.value.cluster_name), "ebs-csi"])
  service_account_name             = join("-", [lower(each.value.cluster_name), "ebs-csi"])
  irsa_role_name_prefix            = each.value.cluster_name
  irsa_tags                        = var.tags
}

Expected Results

I expected the cluster to be provisioned without error, with the ebs-csi-driver addon installed and a new IAM role created.

Actual Results

│ Error: unexpected EKS Add-On (identity-tfhello-dev-X-use1:vpc-cni) state returned during creation: unexpected state 'CREATE_FAILED', wanted target 'ACTIVE'. last error: 1 error occurred:
│ 	* : ConfigurationConflict: Conflicts found when trying to apply. Will not continue due to resolve conflicts mode. Conflicts:
│ ServiceAccount aws-node - .metadata.labels.app.kubernetes.io/version
│ ClusterRole.rbac.authorization.k8s.io aws-node - .rules
│ ClusterRole.rbac.authorization.k8s.io aws-node - .metadata.labels.app.kubernetes.io/version
│ ClusterRoleBinding.rbac.authorization.k8s.io aws-node - .metadata.labels.app.kubernetes.io/version
│ DaemonSet.apps aws-node - .metadata.labels.app.kubernetes.io/version
│ DaemonSet.apps aws-node - .spec.template.spec.containers[name="aws-node"].image
│ DaemonSet.apps aws-node - .spec.template.spec.initContainers[name="aws-vpc-cni-init"].image


│ [WARNING] Running terraform apply again will remove the kubernetes add-on and attempt to create it again effectively purging previous add-on configuration

│   with module.eks_us-east-1.module.eks["X"].aws_eks_addon.this["vpc-cni"],
│   on .terraform/modules/eks_us-east-1.eks/main.tf line 382, in resource "aws_eks_addon" "this":
│  382: resource "aws_eks_addon" "this" {
@thetoolsmith thetoolsmith added the bug Something isn't working label Jun 21, 2023
@jaygridley
Member

Hello @thetoolsmith,

This seems to be an issue with your cluster and the vpc-cni addon, not with this module.

Our module does not use the AWS EKS addon system to install the EBS CSI driver. Instead, it uses Helm or Argo CD to install the driver via the Helm chart at https://github.com/kubernetes-sigs/aws-ebs-csi-driver/tree/master/charts/aws-ebs-csi-driver.
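Conceptually, installing the chart with the Terraform Helm provider looks like the following hedged sketch. This is not the module's actual internals; the resource name and namespace are placeholder assumptions:

```hcl
# Illustrative sketch of a Helm-based install of the aws-ebs-csi-driver
# chart, as opposed to an aws_eks_addon resource.
resource "helm_release" "ebs_csi_driver" {
  name       = "aws-ebs-csi-driver"
  repository = "https://kubernetes-sigs.github.io/aws-ebs-csi-driver"
  chart      = "aws-ebs-csi-driver"
  namespace  = "kube-system"
}
```

Because the driver is installed this way, removing or reconfiguring it never touches the cluster's vpc-cni addon.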

Please let me know if you are still facing the issue once you remove our module from your codebase.

Thanks
