Support CNI Custom Networking #519
Comments
Related: #316
Hey @dippynark 👋 Essentially, we try to resist requests like this one because:
You can see some other examples referenced in this issue: #99. Please convince me otherwise though 😄
I feel this is different to what's discussed in #99, as it's not functionality you could add as a wrapper: it requires the module to know about the ordering requirements. Our plan is to maintain a fork with this ability; I definitely understand the desire to keep complexity to a minimum here.
I don't just mean discussion in that issue, I also mean in all the referenced issues.
I know but look at all the other requests we get 🙂
What about if a dependency is created for this? Maybe we could make that dependency in a nice and elegant way that would enable you to run some extra steps in between. I think what we don't want is to tie this module in any way to the release process of the CNI or any other unrelated things, for example getting PRs every time there's a new release or setting in the CNI.
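For illustration only, the kind of dependency being discussed might look roughly like the sketch below when done outside the module; the `module "eks"` reference and the `configure-cni.sh` script are placeholders, not anything the module provides:

```hcl
# Hypothetical glue resource (not part of the module): runs some CNI
# configuration once the control plane endpoint is known.
resource "null_resource" "cni_custom_networking" {
  triggers = {
    cluster_endpoint = module.eks.cluster_endpoint
  }

  provisioner "local-exec" {
    # Placeholder script that patches the CNI / applies ENIConfigs
    command = "./scripts/configure-cni.sh"
  }
}

# The missing piece: the module's worker autoscaling groups would need
# something like depends_on = [null_resource.cni_custom_networking],
# which is exactly the ordering the module does not expose today.
```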
I would really like this feature as well, but maybe it would be better to pressure Amazon to allow CNI variables to be defined at cluster creation time instead of after cluster creation. It seems hacky for AWS to suggest patching and then terminating instances in their EKS workshop. Then it would be a simple update.
Closing. You can use the Helm chart (https://github.com/aws/eks-charts/tree/master/stable/aws-vpc-cni) or do something outside of this module.
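For example, a minimal sketch of driving that chart from Terraform (the value key should be checked against the chart's values.yaml for the version you pin, and adopting the DaemonSet that EKS pre-installs may need the extra steps described in the chart's README):

```hcl
# Sketch only: manage the VPC CNI via the eks-charts Helm chart instead of
# patching it by hand after cluster creation.
resource "helm_release" "aws_vpc_cni" {
  name       = "aws-vpc-cni"
  namespace  = "kube-system"
  repository = "https://aws.github.io/eks-charts"
  chart      = "aws-vpc-cni"

  set {
    # Enable custom networking; verify this key against the chart's values
    name  = "env.AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG"
    value = "true"
  }
}
```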
I'm submitting a...
A feature request: to add the ability to configure CNI custom networking.
What is the current behavior?
Currently the module does not allow custom k8s configuration (like the auth config) to be applied before the worker nodes are created.
For CNI custom networking to work on a particular node, it is necessary for the `aws-node` daemonset to be patched (e.g. the `AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG` env var set), an ENIConfig applied, and the kubelet parameters (`--node-labels`) modified before any Pod not using the host's networking namespace is scheduled to it.

Currently it's possible for some Pods not using host networking (e.g. coredns applied by EKS) to land on a node before the above has happened, so nodes can come up in different states (i.e. some using the node's subnet for Pod IPs, some using a secondary subnet and some using both) depending on where these Pods land. Problem nodes (i.e. any using the node's subnet for Pod IPs) then need to be terminated.
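For reference, a rough sketch of those per-cluster steps, assuming kubectl is already configured against the cluster; `eniconfig.yaml` is a placeholder file containing one ENIConfig (apiVersion `crd.k8s.amazonaws.com/v1alpha1`) per availability zone, with the secondary subnet and security groups in its spec:

```hcl
# Sketch only: the manual steps that currently have to race the first Pods.
resource "null_resource" "patch_aws_node" {
  provisioner "local-exec" {
    command = <<-EOT
      # Tell the VPC CNI plugin to look up an ENIConfig for each node
      kubectl -n kube-system set env daemonset/aws-node \
        AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG=true
      # Create the ENIConfig objects (placeholder file)
      kubectl apply -f eniconfig.yaml
    EOT
  }
}
```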
What's the expected behavior?
To allow CNI custom networking to be configured via the module, and to order the creation of the worker nodes so that the necessary steps happen before Pods are scheduled to them.
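Purely as an illustration of the ask (none of the `cni_*` inputs below exist in the module today; they are hypothetical names), it might look something like:

```hcl
module "eks" {
  source       = "terraform-aws-modules/eks/aws"
  cluster_name = "example"
  subnets      = ["subnet-aaaa1111", "subnet-bbbb2222"]
  vpc_id       = "vpc-cccc3333"

  # Hypothetical inputs: enable custom networking, supply the secondary
  # (pod) subnets, and have the module configure the CNI before any
  # worker group is created.
  cni_custom_networking = true
  cni_pod_subnets       = ["subnet-dddd4444", "subnet-eeee5555"]
}
```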
Are you able to fix this problem and submit a PR? Link here if you have already.
We (myself and others) would be keen to submit a fix, but this is not yet done. Our proposed solution would be to do something similar to how the aws-auth config is applied, but we wanted to gather feedback as to how narrow the aim of this PR should be (i.e. should it just address the CNI-specific configuration, or should arbitrary resources be allowed?).
I feel the former (just the CNI config) would be best as most configuration doesn't have the same race condition as described above.
Any other relevant info
There may be a recommended way of addressing this problem in the current state of the module, but I have not come across it yet.
Adding the `--node-labels` flag to the kubelet is not an issue, as it can already be solved by modifying the `pre_userdata`; it is just included here for completeness.
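For completeness, a sketch of how the labels can already be handled through the worker group inputs, assuming a module version that exposes `kubelet_extra_args` (the label key/value is just an example of pointing nodes at a particular ENIConfig):

```hcl
module "eks" {
  source       = "terraform-aws-modules/eks/aws"
  cluster_name = "example"
  subnets      = ["subnet-aaaa1111", "subnet-bbbb2222"]
  vpc_id       = "vpc-cccc3333"

  worker_groups = [
    {
      instance_type      = "m5.large"
      asg_max_size       = 3
      # Example label pointing the node at an ENIConfig named after its AZ
      kubelet_extra_args = "--node-labels=k8s.amazonaws.com/eniConfig=us-east-2a"
    },
  ]
}
```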