
Loadbalancer removed from autoscaling groups when using terraform #9913

Closed
lundbird opened this issue Sep 10, 2020 · 3 comments · Fixed by #9794

Comments


lundbird commented Sep 10, 2020

1. What kops version are you running? The command kops version will display this information.

1.18.1

2. What Kubernetes version are you running? kubectl version will print the
version if a cluster is running or provide the Kubernetes version specified as
a kops flag.

1.16.12

3. What cloud provider are you using?
aws

4. What commands did you run? What is the simplest way to reproduce this issue?
kops update cluster --out=terraform --target=terraform

5. What happened after the commands executed?
Running terraform plan on the generated output shows that the masters and the bastion lose their load balancer attachments. Applying this plan takes the apiserver out of service.

7. Please provide your cluster manifest.

apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
  creationTimestamp: "2017-05-04T18:15:27Z"
  generation: 9
  name: ---
spec:
  api:
    loadBalancer:
      type: Public
  authorization:
    rbac: {}
  channel: stable
  cloudProvider: aws
  configBase: ---
  dnsZone: ---
  docker:
    logDriver: ""
    version: 18.09.9
  etcdClusters:
  - etcdMembers:
    - instanceGroup: master-us-east-1a
      name: a
    - instanceGroup: master-us-east-1b
      name: b
    - instanceGroup: master-us-east-1c
      name: c
    name: main
  - etcdMembers:
    - instanceGroup: master-us-east-1a
      name: a
    - instanceGroup: master-us-east-1b
      name: b
    - instanceGroup: master-us-east-1c
      name: c
    name: events

9. Anything else we need to know?

The latest docs for the AWS Terraform provider (https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/autoscaling_attachment) say that when you use aws_autoscaling_attachment instead of defining the load balancer in-line in the ASG, you need to add a lifecycle block to the aws_autoscaling_group resource so that it ignores changes to the attached load balancers. This lifecycle block is not present in the Terraform output generated by kops.
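For reference, the attachment pattern those docs describe pairs the ASG with a separate attachment resource; a minimal sketch (the resource and ELB names here are illustrative, not the exact names kops emits):

resource "aws_autoscaling_attachment" "masters" {
  # Attach the API load balancer to the masters ASG outside the ASG definition.
  autoscaling_group_name = aws_autoscaling_group.masters.id
  elb                    = aws_elb.api.id
}

Without the ignore_changes rule, the ASG resource itself sees the externally attached load balancer as drift and plans to detach it, which is what the lifecycle block below prevents.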

As a temporary workaround I have created a file, asg_override.tf, where I add the lifecycle block to each aws_autoscaling_group resource as follows:

resource "aws_autoscaling_group" "bastions" {
  lifecycle {
    ignore_changes = [load_balancers, target_group_arns]
  }
}

Because the file name ends in _override.tf, Terraform merges this block into the kops-generated resource, so the load balancer attachment is not removed.

lundbird changed the title from "Autoscaling groups remove loadbalancer" to "Loadbalancer removed from autoscaling groups" on Sep 10, 2020
rdrgmnzs (Contributor) commented Sep 11, 2020

Hi @lundbird, that’s actually what I’m trying to solve in #9794. However, upon further testing, converting to the in-line form also causes the load balancer to be detached once during the migration.

I’m trying to come up with a way that causes no downtime.
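For reference, the in-line form under discussion declares the load balancer directly on the ASG instead of using a separate aws_autoscaling_attachment resource; a rough sketch (names are illustrative):

resource "aws_autoscaling_group" "masters" {
  # ... launch configuration, subnets, min/max sizes, etc. ...

  # In-line attachment: the ELB is managed as part of the ASG itself.
  load_balancers = [aws_elb.api.id]
}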

lundbird changed the title from "Loadbalancer removed from autoscaling groups" to "Loadbalancer removed from autoscaling groups when using terraform" on Sep 29, 2020
bmelbourne (Contributor) commented:

@lundbird this is a possible duplicate of #9891 as well.

karancode (Contributor) commented:

I am facing the same issue with the versions below:

kops: Version 1.18.2 (git-84495481e4)
Terraform v0.12.16
Terraform provider "aws" (hashicorp/aws) 3.21.0

Using provider version v2.51.0 solves the issue.
Is this fixed in a later version of kops?
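For anyone who wants to try the same pin with Terraform 0.12, the provider version can be constrained in the provider block; since the kops output may already declare a provider "aws" block, the constraint likely belongs in an override file such as provider_override.tf (the file name is illustrative, and this is only a sketch of the workaround reported above, not a kops-endorsed fix):

# provider_override.tf -- merged into the generated provider block by Terraform.
provider "aws" {
  # Pin to the last provider release reported above as not detaching the load balancers.
  version = "2.51.0"
}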
