Workloads deployed on a node of a node_groups are unable to make calls to the internet #1089
Hmm. Can you please check whether you have a NAT gateway attached to your private subnets? Can you please share your vpc module configuration? You can also have a look at examples/managed_node_groups for a working example. This will probably help you figure out what's wrong in your deployment.
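For reference, a minimal sketch of a VPC configuration with a NAT gateway for the private subnets. The module version, region, names, and CIDRs below are assumptions for illustration, not values from this thread:

```hcl
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 2.0" # assumed version

  name = "eks-vpc"      # placeholder name
  cidr = "10.0.0.0/16"  # placeholder CIDR

  azs             = ["us-east-1a", "us-east-1b", "us-east-1c"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]

  # Without a NAT gateway, nodes (and pods) in the private subnets
  # have no route to the internet.
  enable_nat_gateway = true
  single_nat_gateway = true

  enable_dns_hostnames = true
  enable_dns_support   = true
}
```

If `enable_nat_gateway` is missing or `false`, outbound calls like `curl google.com` from private-subnet nodes will fail even though everything inside the VPC works.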
I am having the same problem. I found that the
Can you please elaborate? Which SG rules are missing?
I think it's because you have an egress rule which allows internet traffic in the worker SG. This is done for the worker SG created by this module. @marcosborges @ScubaDrew can you confirm please? I don't use MNG at all, and when I go through the code, I don't understand why this is coming up only now.
I just tested internet access within a managed node group and everything works as expected. I was wondering what you mean by "unable to make calls to the internet"? Is it a DNS issue, or is your DNS resolution working correctly and you're just having trouble reaching the internet? If you have a DNS issue, I suspect that your core-dns pods are running in your worker groups and your pods in your managed node groups can't reach them. This is because there are no rules to allow communication between worker groups and managed node groups by default. To do that, you can set
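The variable name is cut off above; assuming it is the module's `worker_create_cluster_primary_security_group_rules` flag (a guess based on the later discussion of the cluster primary SG and PR #1094 in this thread), a mixed setup would look roughly like this sketch, with placeholder names and sizes:

```hcl
module "eks" {
  source          = "terraform-aws-modules/eks/aws"
  cluster_name    = "my-cluster" # placeholder name
  cluster_version = "1.18"       # assumed version
  subnets         = module.vpc.private_subnets
  vpc_id          = module.vpc.vpc_id

  # Assumed variable name: add rules so workloads using the cluster
  # primary SG (managed node groups) and the worker SG (self-managed
  # worker groups) can reach each other, e.g. pods -> coredns.
  worker_create_cluster_primary_security_group_rules = true

  # Self-managed worker group (where coredns may be scheduled).
  worker_groups = [
    {
      instance_type = "m5.large"
      asg_max_size  = 3
    },
  ]

  # Managed node group (where the failing workloads run).
  node_groups = {
    example = {
      desired_capacity = 2
      max_capacity     = 3
      min_capacity     = 1
      instance_types   = ["m5.large"]
    }
  }
}
```

With only one of the two group types, this flag would not matter, since all pods already share the same SG.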
@barryib I think you are right - the issue is DNS. coredns is not running on the
It seems the
As I showed above
Here is the description of
It means that it allows communication between pods in the worker groups SG and the managed node groups SG. MNG uses the primary SG (this was introduced in EKS 1.14).
Got it. I'll add that then! Thank you. The example does not have it - https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/examples/managed_node_groups/main.tf -- so DNS wouldn't work there, right? Thanks again
Oh good catch. Can you please test it and see if it solves your issue, or even better, confirm that the example you linked is not working as expected and open a PR to update the example/FAQ?
Oh sorry, that example works. It's quite late here ^^ That example works because you don't have both worker groups and managed node groups => your core DNS pods run in your MNG, which already shares the same primary SG. This issue comes up when you have worker groups and MNG, and your core DNS is scheduled on one side of your cluster (in your case, on your self-managed worker groups).
Confirmed:
Great. Can you please review #1094?
Use the public network: `subnets = module.vpc.public_subnets`
Can you please elaborate? How would using public subnets open communication between pods scheduled on managed node groups and those on self-managed worker groups?
cc @ScubaDrew
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
Hello guys, I just deployed an EKS cluster using the terraform-aws-eks module. At first it was a pleasant experience; the use of the module was very fluid.
Then I ran into a problem: I configured the module to create a node group so that certain types of applications could be deployed on it.
When I deploy the applications on this node group via a node selector, my applications are unable to make calls outside the cluster, e.g. `curl google.com`.
When I remove the node selector and redo the deployment, the application is scheduled on the standard EKS worker nodes. On those nodes the application can make calls outside the cluster.
What is the current behavior?
Workloads deployed on the node groups are unable to make calls outside the cluster.
I started by checking the subnets where the node groups' EC2 instances were being launched. They turned out to be the same subnets as the worker group nodes'.
To create the VPC I used the vpc module (terraform-aws-modules/terraform-aws-vpc).
I then checked whether it was something in the security groups; the rules for the node groups are the same as for the worker nodes.
I also validated the IAM roles, and again they were the same.
I need a light, a tip, a direction or a smoke signal to continue creating my environment.
I will be extremely grateful for the help.