At the moment there are a few user-related things, including the aws-auth configmap, that get created in the infrastructure Terraform, which is in the same state that creates the EKS cluster itself. This is not recommended by the provider - see this note.
This occasionally causes weird issues like this message when trying to plan:
Error: Get "http://localhost/api/v1/namespaces/kube-system/configmaps/aws-auth": dial tcp [::1]:80: connect: connection refused
It's probably a good idea to move everything that uses the kubernetes provider over to the kubernetes Terraform state, but unfortunately this would mean basically copying the upstream EKS module's code that's responsible for creating the aws-auth configmap, and then disabling the configmap creation in the module.
Investigate how much effort this would be.
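A rough sketch of what the infrastructure side might look like, assuming the upstream module is terraform-aws-modules/eks and exposes a `manage_aws_auth` toggle (it does in the versions I've looked at, but worth verifying against our pinned version):

```hcl
# Sketch: stop the EKS module from managing aws-auth in this state.
# manage_aws_auth is the toggle in terraform-aws-modules/eks v17.x;
# check the pinned module version before relying on it.
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 17.0"

  cluster_name = var.cluster_name
  # ... existing inputs unchanged ...

  manage_aws_auth = false
}
```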
@bmonkman Is this related to commitdev/zero#371? It might be interesting to address the duplication of the Kubernetes code across both the node and the Go backend by creating a custom Terraform module.
It's not really related to that, no. The problem here is that we use an external module to create the EKS cluster, and it has a feature where, after creating the cluster, it also creates the aws-auth configmap that EKS requires to define access to the cluster. The way it does this is very convenient, so we were happy to take advantage of it, but there's an issue with the kubernetes Terraform provider: if you use the provider in the same state that created the cluster, there can be intermittent failures due to how it resolves dependencies. I think we can safely leave this one for now and gather more data about how common this dependency issue actually is.
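If we do pick this up later, the part we'd be copying out of the module boils down to a single `kubernetes_config_map` resource in the kubernetes state, roughly like the sketch below (the role ARN input and the groups are placeholders, not our actual variables):

```hcl
# Rough sketch of managing aws-auth from the separate kubernetes state.
# var.worker_iam_role_arn is a placeholder; the real value would come
# from the infrastructure state (e.g. via a remote state data source).
resource "kubernetes_config_map" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    mapRoles = yamlencode([
      {
        rolearn  = var.worker_iam_role_arn
        username = "system:node:{{EC2PrivateDNSName}}"
        groups   = ["system:bootstrappers", "system:nodes"]
      },
    ])
  }
}
```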