Cni custom networking #316
Conversation
I see your idea @frimik, and it looks cool, but I don't think this is something we will merge. It's very complex, and managing CNI-related resources, or really any resources inside k8s, is not something we want to do in this module: #99. It looks like you can implement this outside the module quite easily though, right? There are no missing outputs that would stop you doing that?
@max-rocket-internet I understand your position exactly, because I'm of the same idea: spawn the cluster, then have a next stage which applies "components". In this case though, it's so tightly tied to the cluster inception itself (CIDR, security groups, subnets). And these are not NEW manifests, but rather patches to the existing one ( Plus there's already

I did create two implementations of this, one as a "wrapper" module, and then this one, because wrapper modules in Terraform are simply awful to manage today. Module "passthrough" could have been made bearable if tf supported undef. Enough ranting about that though 😄. Do you like any of the other parts in this pull req?
Obviously we're now also impatiently waiting for the official release of the vpc cni to actually include the new label code, per the Custom CNI Networking documentation ...
I think this is the best solution. We do this for many TF modules. For example, we wrap public RDS and Aurora modules in order to get CloudWatch alarms and Route53 records all done in one module.
This PR would be very useful to automatically create ENIConfig resources in kubernetes for each subnet used. Not sure how it could be done as a wrapper. Can be done as a secondary module perhaps... This is to support CNI custom networking per the Amazon EKS doco: https://docs.aws.amazon.com/eks/latest/userguide/cni-custom-network.html
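For reference, an ENIConfig resource per subnet, as described in the AWS custom networking documentation, looks roughly like the sketch below. The subnet and security group IDs are placeholders, and naming the resource after the availability zone is the convention the docs suggest:

```yaml
apiVersion: crd.k8s.amazonaws.com/v1alpha1
kind: ENIConfig
metadata:
  # Conventionally one ENIConfig per AZ, named after the AZ
  name: us-east-1a
spec:
  subnet: subnet-0123456789abcdef0        # placeholder: subnet for pod ENIs
  securityGroups:
    - sg-0123456789abcdef0                # placeholder: security group for pod ENIs
```

This is the kind of resource the PR would generate from Terraform's subnet and security group outputs.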
Actually I have discovered that ENIConfig resources are not needed if setting AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG=false. Pods are simply created with the same subnet and security group as the worker ENI. Just start workers in different subnets and use node labels and nodeSelector to schedule pods on them. Then this PR, which generates ENIConfig with subnet and sg from terraform output, is not needed! Much better.
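A minimal sketch of the node-label/nodeSelector approach described above. The label key and value are assumptions, applied however nodes get labeled in your setup (e.g. kubelet `--node-labels` or `kubectl label node`):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  nodeSelector:
    subnet: private-a      # assumed label on workers launched in that subnet
  containers:
    - name: app
      image: nginx
```

With the CNI in its default mode, the pod then inherits the subnet and security group of the worker node it lands on.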
I'm going to lock this pull request because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems related to this change, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further. |
PR o'clock
Description
Favour some simplicity in the kubectl manifest applications. Terraform now properly fails its run unless kubectl explicitly succeeds.

Favour an `asset_dir` parameter over `config_output_path`, which has been explicitly removed. The asset dir will include the manifests and the kubeconfig file. Previously, these were only temporarily written to disk in the module path. Now they will be properly output in the run, and it will be the operator's job to decide what to do with the files.

The `write_aws_auth_config` and `write_kubeconfig` parameters have been removed.

Checklist
- `terraform fmt` and `terraform validate` both work from the root and `examples/eks_test_fixture` directories (look in CI for an example)