
Investigate moving creation of all k8s resources to the kubernetes terraform #221

Closed
bmonkman opened this issue Aug 18, 2021 · 4 comments

@bmonkman
Contributor

At the moment a few user-related resources, including the aws-auth configmap, are created in the infrastructure terraform, which lives in the same state that creates the EKS cluster itself. This is not recommended by the provider - see this note.
This occasionally causes intermittent errors like the following when running terraform plan:

Error: Get "http://localhost/api/v1/namespaces/kube-system/configmaps/aws-auth": dial tcp [::1]:80: connect: connection refused

It's probably a good idea to move everything that uses the kubernetes provider over to the kubernetes terraform. Unfortunately, this would require essentially copying the upstream EKS module's code that creates the aws-auth configmap, and then disabling the configmap creation in the module itself.

Investigate how much effort this would be.
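For illustration, a minimal sketch of what the split might look like. The module input name (`manage_aws_auth`), variable names, and role mapping below are assumptions for the sake of the example, not the repo's actual code; the flag name differs across versions of the upstream module.

```hcl
# --- infrastructure state: create the cluster, skip the configmap ---
module "eks" {
  source = "terraform-aws-modules/eks/aws"

  # Flag name is an assumption; it varies by module version
  # (e.g. manage_aws_auth vs manage_aws_auth_configmap).
  manage_aws_auth = false

  # ... cluster configuration ...
}

# --- kubernetes state: recreate aws-auth here instead ---
resource "kubernetes_config_map" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    # var.node_role_arn is a hypothetical input for this sketch.
    mapRoles = yamlencode([
      {
        rolearn  = var.node_role_arn
        username = "system:node:{{EC2PrivateDNSName}}"
        groups   = ["system:bootstrappers", "system:nodes"]
      }
    ])
  }
}
```

The cost this comment alludes to is the second half of the sketch: the role mappings the module currently generates for us would have to be maintained by hand in the kubernetes state.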

@edmondop

@bmonkman Is this related to commitdev/zero#371? It might be interesting to address the duplication of the Kubernetes code across both the Node and Go backends by creating a custom Terraform module.

@bmonkman
Contributor Author

It's not really related to that, no. The problem here is that we use an external module to create the EKS cluster. That module has a feature where, after creating the cluster, it also creates the aws-auth configmap that EKS requires to define access to the cluster. The way it does this is very convenient, so we were happy to take advantage of it. However, there is a known issue with the kubernetes terraform provider: if you use the provider in the same state that created the cluster, there can be intermittent failures due to how it resolves dependencies. I think we can safely leave this one for now and gather more data about how common this dependency issue actually is.
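To make the failure mode concrete, here is a hedged illustration of the problematic pattern (the names are assumptions, not necessarily this repo's code). The kubernetes provider is configured from attributes of a cluster defined in the same state; during a plan where those attributes are not yet known (for example, when the cluster is being created or replaced), the provider can fall back to its default host, which is where the `http://localhost ... connection refused` error above comes from.

```hcl
data "aws_eks_cluster" "this" {
  name = module.eks.cluster_id
}

data "aws_eks_cluster_auth" "this" {
  name = module.eks.cluster_id
}

# Provider credentials derived from resources in the same state: if the
# cluster attributes are unknown at plan time, the provider is configured
# with empty values and falls back to localhost.
provider "kubernetes" {
  host                   = data.aws_eks_cluster.this.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.this.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.this.token
}
```

Moving the kubernetes provider usage into a separate state, planned after the cluster exists, avoids this ordering problem entirely.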

@bmonkman
Contributor Author

Will be fixed by PR #250

@bmonkman bmonkman moved this from Backlog to In Progress in Zero - Full project view Jan 26, 2022
@bmonkman
Contributor Author

Creation of the aws-auth configmap has been moved out of the terraform state that creates the cluster.

Repository owner moved this from In Progress to Done in Zero - Full project view Mar 10, 2022