
Can not set internal annotations to specify load balancer ssl certs #60

Closed
udangel-r7 opened this issue Aug 25, 2017 · 60 comments

@udangel-r7

With the recent change disallowing internal annotations, it is no longer possible to set internal annotations.

I want to use these internal annotations to provide SSL certificates/arn configuration for the load balancers. Any idea for a workaround?

Affected Resource(s)

  • service

Terraform Configuration Files

data "aws_elb_hosted_zone_id" "main" {}

data "aws_acm_certificate" "hello" {
  domain   = "*.hello.com"
  statuses = ["ISSUED"]
}

resource "kubernetes_service" "hello" {
  metadata {
    name = "terraform-hello-integrated-example"

    annotations {
      "service.beta.kubernetes.io/aws-load-balancer-ssl-cert" = "${data.aws_acm_certificate.hello.arn}"
    }
  }

  spec {
    selector {
      app = "hello"
    }

    port {
      port        = 80
      target_port = 80
    }

    type = "LoadBalancer"
  }
}

resource "aws_route53_record" "hello" {
  zone_id = "HELLO"
  name    = "hello"
  type    = "A"

  alias {
    name                   = "${kubernetes_service.hello.load_balancer_ingress.0.hostname}"
    zone_id                = "${data.aws_elb_hosted_zone_id.main.id}"
    evaluate_target_health = true
  }
}
@radeksimko
Member

Hi @udangel-r7
I'm sorry for any inconvenience caused by this.

Allowing users to specify internal (kubernetes.io) annotations is something I plan to look into eventually, but it's non-trivial for us to support. See the full explanation at #50 (comment)

@fbsb

fbsb commented Sep 5, 2017

Hi, I am also affected by this change. Is there any known workaround to configure SSL for the load balancers on AWS?

@frankthelen

The same problem exists when your LB should not have an external IP, e.g. in development environments with only internal IPs. I'd be happy if this issue could be solved. Thank you.

annotations {
   "service.beta.kubernetes.io/aws-load-balancer-internal" = "0.0.0.0/0"
}

@jsullivanLI

Is there a previous version of Terraform where internal annotations are allowed?

@pdecat
Contributor

pdecat commented Oct 30, 2017

Hi, same issue for me, as I need to specify the well-known pv.beta.kubernetes.io/gid annotation on my persistent volumes.

I've read #50 (comment) and understand that the root issue, from the Terraform point of view, is more with dynamic Kubernetes annotations that can change at any time, like those listed at https://kubernetes.io/docs/reference/labels-annotations-taints/ :

  • beta.kubernetes.io/arch
  • beta.kubernetes.io/os
  • kubernetes.io/hostname
  • beta.kubernetes.io/instance-type
  • failure-domain.beta.kubernetes.io/region
  • failure-domain.beta.kubernetes.io/zone

As maintaining an exhaustive list of such annotations seems overwhelming, if not impossible, over time, what about allowing annotations to be whitelisted by configuration?

I guess doing so at the provider level would be practical:

provider "kubernetes" {
  internal_annotations_whitelist = [
    "pv.beta.kubernetes.io/gid",
    "service.beta.kubernetes.io/aws-load-balancer-internal",
    "service.beta.kubernetes.io/aws-load-balancer-ssl-cert",
  ]
}

What do you think?

PS: I'll use the pod security context's fs_group field to resolve for my current FS GID issue, but I believe whitelisting will come in handy anyway.
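For reference, the fs_group approach looks roughly like this (a sketch against the 2017-era provider syntax; the pod name, image, and GID are made up):

```hcl
resource "kubernetes_pod" "example" {
  metadata {
    name = "fs-gid-example"
  }

  spec {
    security_context {
      # GID applied as the group owner of supported volumes mounted in the pod
      fs_group = 2000
    }

    container {
      name  = "app"
      image = "nginx:1.15"
    }
  }
}
```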

Edit: #325 is a better alternative as it does not require the user to actively maintain the whitelist.

@bobbytables

This is pretty annoying. Why enforce this at all? Doesn't the Kubernetes API already prevent this?

@Phylu

Phylu commented Feb 26, 2018

Actually, there is a good list of the AWS-specific annotations here: https://gist.github.com/mgoodness/1a2926f3b02d8e8149c224d25cc57dc1

I think a whitelist would be the best solution currently.

@bitglue

bitglue commented Apr 2, 2018

Further broken use cases include assigning an ELB to a service: https://gist.github.com/mgoodness/1a2926f3b02d8e8149c224d25cc57dc1

And allowing syscalls in pods: https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/#setting-sysctls-for-a-pod

@offlinehacker

In my opinion this check is unneeded; it's up to the user not to use these annotations, or to use them when they know what they are doing.

@r-moiseev

Same here. I am trying to make a storage class the default, and I need the storageclass.kubernetes.io/is-default-class annotation.

Use of this annotation is not restricted by Kubernetes and is documented at https://kubernetes.io/docs/tasks/administer-cluster/change-default-storage-class/

@kattelus

kattelus commented May 14, 2018

Same problem with azure internal load balancer:
https://docs.microsoft.com/en-us/azure/aks/internal-lb
service.beta.kubernetes.io/azure-load-balancer-internal: "true"

@kattelus

Is there any possibility of allowing annotation usage, e.g. changing this check to just output a warning? As it stands, everyone using internal load balancers in Azure or AWS, or otherwise defining additional parameters for them, fails. Or is there an easy way to make those work some other way?

@bitglue

bitglue commented May 17, 2018

Please read #50 (comment), which explains why this check exists.

@Phylu

Phylu commented May 18, 2018

I think that, especially because of the reasoning behind #50, this is a big issue for many of us.

Unfortunately, usage after this change has shown that some tasks that were possible before are no longer possible. For those of us who rely on these features (internal load balancers, AWS certificates, ...), this makes those things really hard to achieve.

Therefore, I would be really interested in a feasible solution, such as whitelisting certain annotations.

@chreichert

Same problem here, when the external IP for a load balancer lives in a different resource group on Azure. The annotation needed for this is not allowed:
service.beta.kubernetes.io/azure-load-balancer-resource-group: RG_SANDBOX_006_QA_PROD_PublicIP_Pool

Whitelisting these necessary annotations would be highly appreciated.
Disallowing them renders Terraform useless for us at the moment.

@TechyMatt

TechyMatt commented Jun 12, 2018

I'm now also hit by this bug, and having a list of approved annotations would seriously help. I'm not a developer and had never even ventured into Go before, but I managed to piece together the following extension to the existing function validateAnnotations in https://github.com/terraform-providers/terraform-provider-kubernetes/blob/master/kubernetes/validators.go.

func validateAnnotations(value interface{}, key string) (ws []string, es []error) {
	// Internal annotations that are explicitly permitted despite the
	// internal-key check below. Built once, outside the loop.
	permittedInternalKubAnnotations := map[string]bool{
		"service.beta.kubernetes.io/azure-load-balancer-internal":                        true,
		"service.beta.kubernetes.io/azure-load-balancer-resource-group":                  true,
		"service.beta.kubernetes.io/aws-load-balancer-access-log-emit-interval":          true,
		"service.beta.kubernetes.io/aws-load-balancer-access-log-enabled":                true,
		"service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name":         true,
		"service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix":       true,
		"service.beta.kubernetes.io/aws-load-balancer-backend-protocol":                  true,
		"service.beta.kubernetes.io/aws-load-balancer-connection-draining-enabled":       true,
		"service.beta.kubernetes.io/aws-load-balancer-connection-draining-timeout":       true,
		"service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout":           true,
		"service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled": true,
		"service.beta.kubernetes.io/aws-load-balancer-internal":                          true,
		"service.beta.kubernetes.io/aws-load-balancer-proxy-protocol":                    true,
		"service.beta.kubernetes.io/aws-load-balancer-ssl-cert":                          true,
		"service.beta.kubernetes.io/aws-load-balancer-ssl-ports":                         true,
	}

	m := value.(map[string]interface{})
	for k := range m {
		errors := utilValidation.IsQualifiedName(strings.ToLower(k))
		if len(errors) > 0 {
			for _, e := range errors {
				es = append(es, fmt.Errorf("%s (%q) %s", key, k, e))
			}
		}

		// Reject internal keys unless they are on the whitelist above.
		if isInternalKey(k) && !permittedInternalKubAnnotations[k] {
			es = append(es, fmt.Errorf("%s: %q is internal Kubernetes annotation", key, k))
		}
	}
	return
}

I've tested the above and it works with my specific use case of needing an internal load balancer in Azure for provisioning a POC. Whilst I understand why the check for internal annotations was built in the first place, it severely impacts the usefulness of the Kubernetes provider without the ability to make exceptions.

If there is appetite for me to submit a pull request I'm more than happy to, but I suspect there is a significantly more efficient way to do the above check :). I'll also have to figure out what stops the attempt to re-apply the annotation every time, as this is what happens when I re-run:

  ~ kubernetes_service.nginx
      metadata.0.annotations.%:                                                       "0" => "1"
      metadata.0.annotations.service.beta.kubernetes.io/azure-load-balancer-internal: "" => "true"

@juhan88

juhan88 commented Oct 26, 2018

Are there plans to support annotations? Specifically, creating Kubernetes services with internal private ELBs.

@mbarrien
Contributor

mbarrien commented Oct 30, 2018

Here's another set of "internal" kubernetes.io annotations that are disallowed: https://github.com/kubernetes-sigs/aws-alb-ingress-controller/blob/master/docs/ingress-resources.md#annotations. This ingress controller is absolutely unusable without whitelisting the large list.

C'mon guys, the examples where this blocks legitimate usage far outweigh and far outnumber the original reasons for blocking it, and the check severely degrades the usability of this provider. Please revert #50 or reduce its scope.

@andyvanosdale
Contributor

Now that development is starting back up on this, it would be great to have this revisited. I would love to use this in some different projects, but kube2iam uses internal annotations to allow pods to assume an IAM role.

@dh-harald
Contributor

dh-harald commented Nov 16, 2018

I'm not a go developer, so I won't open a pull request, but here's my solution for this problem:
dh-harald@d0b48e8

Idea: @pdecat / Solution: @mb290

Here's an example:

provider "kubernetes" {
  internal_annotations_whitelist = [
    "storageclass.kubernetes.io/is-default-class",
  ]
}

resource "kubernetes_storage_class" "gp2" {
  metadata {
    name = "gp2"

    annotations {
      "storageclass.kubernetes.io/is-default-class" = "true"
    }
  }
  storage_provisioner = "kubernetes.io/aws-ebs"
  parameters {
    type      = "gp2"
    encrypted = "true"
  }
}

I don't know if this is known, but if you use true instead of "true", Terraform translates it into 1, and it won't work.

@jakubigla

jakubigla commented Dec 10, 2018

Haha, the lack of support for internal annotations just made this and a couple of other Kubernetes resources useless, especially in enterprise. Thanks for removing this. Now automating this just got much more complicated.

@dh-harald
Contributor

dh-harald commented Dec 10, 2018

@jakubigla For similar reasons, we've started using https://github.com/sl1pm4t/terraform-provider-kubernetes instead of the official provider... It has no problem with internal annotations, and a lot of the beta APIs we're using are supported (ingress, clusterrole, etc.)... Worth a try instead of, for example, dumb hacks with null_resource.
I thought the official provider could become usable if we helped out, but it looks like nothing has changed... So I'm very disappointed.

@holms

holms commented Apr 8, 2019

So, is there any progress on this? It's April 2019 already. @pdecat @AlexBoulyga Anyone? Help? DevOps can't live without Terraform!

@Xyrodileas

And here's a dodgy workaround (a partial implementation: it provisions, but does not update or destroy). YMMV.

resource "null_resource" "eks_service_metadata_annotations" {
  provisioner "local-exec" {
    command = <<EOF
kubectl annotate --overwrite --namespace ${local.eks_namespace} \
  service ${kubernetes_service.sample.metadata.0.name} \
  'service.beta.kubernetes.io/aws-load-balancer-backend-protocol=http' \
  'service.beta.kubernetes.io/aws-load-balancer-ssl-cert=${local.acm_certificate_arn}'
EOF
  }
}
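To make the workaround re-apply on changes, a triggers map can force the null_resource to be replaced whenever its inputs change (a sketch reusing the same hypothetical locals and service; destroy is still not handled):

```hcl
resource "null_resource" "eks_service_metadata_annotations" {
  # Replace (and therefore re-provision) this resource whenever the
  # certificate ARN or the target service name changes.
  triggers = {
    acm_certificate_arn = "${local.acm_certificate_arn}"
    service_name        = "${kubernetes_service.sample.metadata.0.name}"
  }

  provisioner "local-exec" {
    command = <<EOF
kubectl annotate --overwrite --namespace ${local.eks_namespace} \
  service ${kubernetes_service.sample.metadata.0.name} \
  'service.beta.kubernetes.io/aws-load-balancer-ssl-cert=${local.acm_certificate_arn}'
EOF
  }
}
```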

Thanks a lot. I would prefer that the official provider supported k8s annotations, but ¯\_(ツ)_/¯

@rpatrick00

@Xyrodileas, most of us have gone off the reservation and built our own version of the provider to get around this problem. I created PR #244 for the main provider to address the problem in the way suggested by one of the contributors. Others, like sl1pm4t / terraform-provider-kubernetes, have forked the code since Hashicorp seems uninterested in making this provider capable of real-world deployments. Either way, the only real workaround at this point is to either build your own custom provider or use the local-exec hack to run kubectl.

@erick-thompson

Is this still stalled? I'm trying to use External DNS, and I've run into this issue. It's very annoying. I don't like the idea of Terraform making decisions about what I can and can't do in k8s. I've run into other issues like this (not being able to set automountServiceAccountToken on deployments).

Is there a workaround for this particular issue?

@tdmalone
Contributor

tdmalone commented Apr 16, 2019

@erick-thompson Workaround is currently to shell out to kubectl using a null_resource and local-exec provisioner (see a few comments up for a simple example). Fix appears to not be coming anytime soon, I think likely not to happen until well after Terraform 0.12 launches.

Regarding automountServiceAccountToken, there is a potential workaround here.

Alternatively there's also a number of forks of the provider, such as this one, which appear to be a good option (I haven't gone down this path just yet). Or of course you could also build a provider locally based on any of the PRs that have been submitted to solve this issue.

@erick-thompson

I ended up using that workaround for the service account token, but I wish I had those hours back, when I was trying to debug what was going on and how to fix it.

For this issue, I'll likely just use Terraform to create the YAML file and apply it later. The problem is that I provision the cluster (in my case AKS) from the same Terraform configuration as the service, so I can't use local-exec unless I do something to ensure that the kubectl config is set up correctly.
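Generating the manifest from Terraform could look roughly like this (a sketch in 0.11-era syntax; the template path, variable names, and output location are made up):

```hcl
data "template_file" "service_manifest" {
  template = "${file("${path.module}/templates/service.yaml.tpl")}"

  vars {
    cert_arn = "${data.aws_acm_certificate.hello.arn}"
  }
}

resource "local_file" "service_manifest" {
  # Rendered manifest, to be applied out-of-band with kubectl apply -f
  content  = "${data.template_file.service_manifest.rendered}"
  filename = "${path.module}/generated/service.yaml"
}
```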

@tdmalone
Contributor

but I wish I had those hours back, when I was trying to debug what was going on and how to fix it.

Yeah, I hear you. It's one thing to make questionable design decisions; it's another thing entirely to not document them!

I can't use local-exec unless I do something to ensure that the kubectl config is set up correctly.

I'm on EKS so I'm not sure what's available for AKS, but I'm using aws eks update-kubeconfig --name "my-cluster-name" && kubectl apply ... in my local-exec. I'm not sure if Azure has a similar tool or automated way to configure kubeconfig.
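Wrapped into a provisioner, that pattern looks roughly like this (the cluster name and manifest path are placeholders):

```hcl
resource "null_resource" "apply_manifests" {
  provisioner "local-exec" {
    # Refresh the kubeconfig for the target cluster, then apply the manifests.
    command = "aws eks update-kubeconfig --name my-cluster-name && kubectl apply -f ./manifests/"
  }
}
```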

@erick-thompson

I took a similar approach and used "az aks get-credentials". It works well enough, but I find I have to be a lot more explicit about my dependencies.

@dayglojesus

Plan works, apply produces this...

Error: kubernetes_service.ingress_nginx: metadata.0.annotations: "service.beta.kubernetes.io/aws-load-balancer-type" is internal Kubernetes annotation

@tdmalone
Contributor

This should now be fixed by the merge of #325

@selslack

Can we have a release tagged with the fix included, please?

@alexsomesan
Member

alexsomesan commented May 17, 2019 via email

@tdmalone
Contributor

This has now been released in 1.7.0: https://github.com/terraform-providers/terraform-provider-kubernetes/blob/master/CHANGELOG.md#170-may-22-2019
🎉 🚀

@bobbytables

bobbytables commented May 23, 2019 via email

@andyvanosdale
Contributor

andyvanosdale commented May 23, 2019

Thank you @tdmalone for taking the torch on this provider!

@andoneve

andoneve commented Jun 12, 2019

I am still getting an error - maybe my syntax is incorrect?

* provider.kubernetes: version = "~> 1.7"

resource "kubernetes_secret" "my-secret" {
  metadata {
    name = "my-secret"
    annotations {
      kubernetes.io/service-account.name = "my-account"
    }
  }
  type = "kubernetes.io/service-account-token"
}

Error:

Error: Argument or block definition required

  on x.tf line x, in resource "kubernetes_secret" "my-secret":
  x:       kubernetes.io/service-account = "my-account"

An argument or block definition is required here. To set an argument, use the
equals sign "=" to introduce the argument value.

In the error message, kubernetes is underlined, as if it were parsed as a keyword.

@dh-harald
Contributor

Can you try with quotes?

resource "kubernetes_secret" "my-secret" {
  metadata {
    name = "my-secret"
    annotations {
      "kubernetes.io/service-account" = "my-account"
    }
  }
  type = "kubernetes.io/service-account-token"
}

@andoneve

I tried before with quotes and I got:

Error: Invalid argument name

  on xx.tf line 17, in resource "kubernetes_secret" "my-token":
  x:       "kubernetes.io/service-account" = "my-account"

Argument names must not be quoted.

@pdecat
Contributor

pdecat commented Jun 12, 2019

With terraform 0.12+, you need to specify annotations as a map and with an equal sign:

resource "kubernetes_secret" "my-secret" {
  metadata {
    name = "my-secret"
    annotations = {
      "kubernetes.io/service-account.name" = "my-account"
    }
  }
  type = "kubernetes.io/service-account-token"
}

@andoneve

andoneve commented Jun 12, 2019

With terraform 0.12+, you need to specify annotations as a map and with an equal sign:

resource "kubernetes_secret" "my-secret" {
  metadata {
    name = "my-secret"
    annotations = {
      "kubernetes.io/service-account" = "my-account"
    }
  }
  type = "kubernetes.io/service-account-token"
}

This worked! Thank you. I was losing my mind. I'm new to Terraform - where could I have found this in reference docs? For this specific issue, neither https://www.terraform.io/docs/providers/kubernetes/r/secret.html#annotations nor https://www.terraform.io/docs/configuration/syntax.html were that helpful.

p.s. for posterity's sake, the correct annotation is "kubernetes.io/service-account.name" - just in case someone ends up reading this comment in more detail. Changed in my comment above.

@pdecat
Contributor

pdecat commented Jun 12, 2019

where could I have found this in reference docs? For this specific issue, neither https://www.terraform.io/docs/providers/kubernetes/r/secret.html#annotations nor https://www.terraform.io/docs/configuration/syntax.html were that helpful.

That specific use case is missing from the docs.
The most extensive source of examples is the acceptance test cases, e.g. https://github.com/terraform-providers/terraform-provider-kubernetes/blob/v1.7.0/kubernetes/resource_kubernetes_secret_test.go#L342
Maybe they could be extracted and put somewhere more easily discoverable.

Edit: I've noticed a few examples were outdated for terraform 0.12, working on a PR.

@tdmalone
Contributor

This ticket should probably be closed - as the original issue is now resolved :)
Might need a new ticket if the docs aren't right for 0.12 though.

@pdecat
Contributor

pdecat commented Jun 13, 2019

+1 to close this issue.

The docs should be fine now: #501

@alexsomesan
Member

Closed

@NZenitram

This still isn't working for me with an AWS NLB. I've checked the versions as well.

@ghost ghost locked and limited conversation to collaborators Apr 21, 2020