CNI custom networking #316

Closed

Conversation

@frimik frimik commented Mar 22, 2019

PR o'clock

Description

Favour simplicity in the kubectl manifest applications; Terraform now properly fails its run unless kubectl explicitly succeeds.
Favour an asset_dir parameter over config_output_path, which has been removed entirely. The asset dir will include the manifests and the kubeconfig file.

Previously, these were only temporarily written to disk in the module path. Now they are properly output during the run, and it is the operator's job to decide what to do with the files.

The write_aws_auth_config and write_kubeconfig parameters have been removed.
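For illustration, consuming the module with the proposed parameter might look like the sketch below. Only the asset_dir name comes from this PR; the module source and the other inputs are placeholders.

```hcl
# Hypothetical usage of the asset_dir parameter proposed in this PR.
# Module source and surrounding inputs are illustrative placeholders.
module "eks" {
  source       = "terraform-aws-modules/eks/aws"
  cluster_name = "my-cluster"
  vpc_id       = "vpc-0123456789abcdef0"
  subnets      = ["subnet-aaaa1111", "subnet-bbbb2222"]

  # Rendered manifests and the kubeconfig end up here instead of being
  # written only temporarily inside the module path.
  asset_dir = "${path.root}/assets/eks"
}
```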

Checklist

  • terraform fmt and terraform validate both work from the root and examples/eks_test_fixture directories (look in CI for an example)
  • Tests for the changes have been added and pass (for bug fixes/features)
  • Test results are pasted in this PR (in lieu of CI) - Test Kitchen fails because it attempts terraform destroy on non-existing infra; destroying an empty Terraform state can fail (newcontext-oss/kitchen-terraform#271)
  • I've added my change to CHANGELOG.md
  • Any breaking changes are highlighted above

@max-rocket-internet
Contributor

I see your idea @frimik, and it looks cool, but I don't think this is something we will merge. It's very complex, and managing CNI-related resources, or really any resources inside k8s, is not something we want to do in this module: #99

It looks like you can implement this outside the module quite easily though, right? There are no missing outputs that would stop you from doing that?

@frimik
Author

frimik commented Mar 22, 2019

@max-rocket-internet I understand your position exactly, because I'm of the same idea: spawn the cluster, then have a next stage which applies "components".

In this case though, it's so tied to the cluster inception itself (CIDR, security groups, subnets), and it's not about NEW manifests, but rather about patching the existing one (aws-node).

Plus there's already kubectl apply in places, though I didn't look at those to compare them conceptually to what I was doing here.

I did create two implementations of this, one which is a "wrapper" module, and then this one, because wrapper modules in Terraform are simply awful to manage today.

A "passthrough" module could have been made bearable if tf supported undef.

Enough ranting about that though 😄. Like any of the other parts in this pull req?
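For reference, the kind of aws-node patching this refers to can be sketched outside the module roughly as below; a minimal sketch assuming kubectl is on the PATH and the kubeconfig path is known, not the implementation in this PR.

```hcl
# Rough sketch: patch the existing aws-node DaemonSet to turn on custom
# networking, rather than applying a brand new manifest. Not this PR's code.
variable "kubeconfig_path" {
  description = "Path to the cluster kubeconfig (placeholder)"
}

resource "null_resource" "aws_node_custom_networking" {
  provisioner "local-exec" {
    command = "kubectl --kubeconfig=${var.kubeconfig_path} -n kube-system set env daemonset/aws-node AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG=true"
  }
}
```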

@frimik
Copy link
Author

frimik commented Mar 22, 2019

Obviously we're now also impatiently waiting for the official release of the VPC CNI to actually include the new label code described in the Custom CNI Networking documentation ...

@frimik frimik closed this Mar 29, 2019
@max-rocket-internet
Contributor

I did create two implementations of this, one which is a "wrapper" module

I think this is the best solution. We do this for many TF modules. For example, we wrap the public RDS and Aurora modules in order to get CloudWatch alarms and Route53 records all done in one module.
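For anyone who lands here later, a wrapper along those lines might look roughly like the sketch below; module sources and inputs are illustrative, not this repo's actual code.

```hcl
# Minimal wrapper-module sketch: consume the public EKS module and keep any
# extra handling (ENIConfig, alarms, DNS records, ...) next to it.
variable "cluster_name" {}
variable "vpc_id" {}
variable "subnets" {}

module "eks" {
  source       = "terraform-aws-modules/eks/aws"
  cluster_name = var.cluster_name
  vpc_id       = var.vpc_id
  subnets      = var.subnets
}

# Anything the base module does not manage lives alongside it in the wrapper,
# e.g. a null_resource that applies ENIConfig manifests using the module's
# kubeconfig output.
```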

@morganchristiansson
Contributor

This PR would be very useful for automatically creating ENIConfig resources in Kubernetes for each subnet used.

Not sure how it could be done as a wrapper. It could perhaps be done as a secondary module...

This is to support CNI custom networking per the Amazon EKS docs: https://docs.aws.amazon.com/eks/latest/userguide/cni-custom-network.html
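For context, an ENIConfig per pod subnet driven from Terraform could look something like the sketch below (one AZ shown). The ENIConfig apiVersion and fields follow the AWS custom-networking docs; the subnet ID, security group ID and kubeconfig path are placeholders.

```hcl
# Rough sketch: apply an ENIConfig for one availability zone via kubectl.
# In practice this would be repeated (e.g. with count) per pod subnet.
resource "null_resource" "eniconfig_eu_west_1a" {
  provisioner "local-exec" {
    command = <<EOT
cat <<'EOF' | kubectl --kubeconfig=./kubeconfig apply -f -
apiVersion: crd.k8s.amazonaws.com/v1alpha1
kind: ENIConfig
metadata:
  name: eu-west-1a
spec:
  subnet: subnet-0123456789abcdef0
  securityGroups:
    - sg-0123456789abcdef0
EOF
EOT
  }
}
```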

@morganchristiansson
Contributor

morganchristiansson commented Jul 4, 2019

Actually I have discovered that ENIConfig resources are not needed when setting AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG=false

Pods are then simply created with the same subnet and security group as the worker ENI. Just start workers in different subnets and use node labels and a nodeSelector to schedule pods on them.

Then this PR, which generates ENIConfig with the subnet and security group from Terraform output, is not needed! Much better.
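If it helps anyone, that approach might look roughly like this on the Terraform side; a sketch that assumes the module's worker_groups input and its kubelet_extra_args key (key names and values here are illustrative, check the module's inputs for your version).

```hcl
# Rough sketch: run a worker group in the pod subnets and label the nodes so
# workloads can target them with a nodeSelector. Values are placeholders.
module "eks" {
  source       = "terraform-aws-modules/eks/aws"
  cluster_name = "my-cluster"
  vpc_id       = "vpc-0123456789abcdef0"
  subnets      = ["subnet-aaaa1111", "subnet-bbbb2222"]

  worker_groups = [
    {
      name               = "pod-subnet-workers"
      kubelet_extra_args = "--node-labels=subnet=pods"
    },
  ]
}
```

Pods would then pin to those nodes with a nodeSelector such as subnet: pods in their spec.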

@github-actions

I'm going to lock this pull request because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems related to this change, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Nov 22, 2022