Unable to mount PV when EKS Node is Encrypted #574
Comments
Apologies - this might not be a bug and might not have anything to do with the EFS driver, but it appears to be the only thing affected by the change... It could possibly be something to do with the AWS CNI driver!
@geoffo-dev Did you get it working? I also have the same issue.
Hey @sharmavijay86 - so sadly not... I haven't had a chance to look at it over the last couple of days, but the need is becoming more urgent. Unfortunately our policies do not allow us to use unencrypted or default AWS images, so it is difficult to test this at the moment, but the only thing I can see it being is an EBS issue at this stage. I am going to try and figure it out - if I do, I will make sure I post it here! Good to know, in a way, that it is not just me!
@sharmavijay86 so I think I got to the bottom of it, and hopefully this will help anyone else. It would appear the example in the module is (possibly) wrong. It tells you to attach the security group in the template as `module.eks_cluster.worker_security_group_id`, but it should be `module.eks_cluster.cluster_primary_security_group_id`. This is because the ENIs attached to the EFS file systems use that security group as their source, so it is the one that needs to be attached to the node... this has been bugging me for weeks.
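For anyone else wiring this up, a minimal sketch of that change might look like the following; the `eks_cluster` module label, the EFS resources, and the variables here are illustrative assumptions, not the module's actual example code:

```hcl
variable "vpc_id" {}
variable "private_subnet_ids" { type = list(string) }

resource "aws_efs_file_system" "this" {
  encrypted = true
}

# Security group for the EFS mount targets: allow NFS (2049/tcp) from the
# cluster *primary* security group, which is what ends up attached to
# managed-node-group nodes, rather than from the worker security group.
resource "aws_security_group" "efs" {
  name_prefix = "efs-mount-targets-"
  vpc_id      = var.vpc_id

  ingress {
    description = "NFS from EKS nodes"
    from_port   = 2049
    to_port     = 2049
    protocol    = "tcp"
    # was: module.eks_cluster.worker_security_group_id
    security_groups = [module.eks_cluster.cluster_primary_security_group_id]
  }
}

resource "aws_efs_mount_target" "this" {
  for_each        = toset(var.private_subnet_ids)
  file_system_id  = aws_efs_file_system.this.id
  subnet_id       = each.value
  security_groups = [aws_security_group.efs.id]
}
```

Presumably this matters because nodes created by a managed node group from a launch template get the cluster primary security group attached, so an NFS rule keyed to the module's worker security group never matches their traffic.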
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close
@k8s-triage-robot: Closing this issue.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/kind bug
What happened?
I have attempted to update my node to use encrypted images, rather than the default unencrypted images EKS provides. To do this I have been using the AWS EKS Terraform module and switched to using launch templates as they show in their example.
https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/examples/launch_templates_with_managed_node_groups
This is the launch template code I am using
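In essence it is just a launch template that overrides the root volume to be KMS-encrypted. A rough sketch with placeholder names, device, and sizes (not the exact code from my setup) looks like this:

```hcl
# Sketch only -- resource name, device name, and sizes are placeholders.
resource "aws_launch_template" "eks_nodes" {
  name_prefix            = "eks-encrypted-nodes-"
  update_default_version = true

  block_device_mappings {
    device_name = "/dev/xvda" # root device of the EKS-optimized AMI

    ebs {
      volume_size = 50
      volume_type = "gp3"
      encrypted   = true
      # Cluster-specific customer-managed key (sketched further below)
      kms_key_id  = aws_kms_key.eks_nodes.arn
    }
  }

  # Everything else (AMI, instance type, networking) is left for the
  # managed node group to fill in.
}
```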
I have created a key specifically for the cluster (and included the autoscaling role and cluster IAM role in the key permissions); however, every time I update the cluster, all of the pods that use persistent volumes cannot mount them.
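The key itself is roughly the following sketch of the `aws_kms_key.eks_nodes` resource referenced in the launch template above; the statement IDs are made up, and the Auto Scaling service-linked role grants are the part a node group normally needs in order to launch instances with a customer-managed key:

```hcl
data "aws_caller_identity" "current" {}

# Sketch: customer-managed key whose policy lets the account administer it and
# lets the Auto Scaling service-linked role use it for encrypted node volumes.
# The cluster/node IAM roles can be added as principals in the same way.
resource "aws_kms_key" "eks_nodes" {
  description = "CMK for EKS node root volumes"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid       = "EnableRootAccountAdmin"
        Effect    = "Allow"
        Principal = { AWS = "arn:aws:iam::${data.aws_caller_identity.current.account_id}:root" }
        Action    = "kms:*"
        Resource  = "*"
      },
      {
        Sid    = "AllowAutoScalingUseOfKey"
        Effect = "Allow"
        Principal = {
          AWS = "arn:aws:iam::${data.aws_caller_identity.current.account_id}:role/aws-service-role/autoscaling.amazonaws.com/AWSServiceRoleForAutoScaling"
        }
        Action = [
          "kms:Encrypt",
          "kms:Decrypt",
          "kms:ReEncrypt*",
          "kms:GenerateDataKey*",
          "kms:DescribeKey",
        ]
        Resource = "*"
      },
      {
        Sid    = "AllowAutoScalingGrants"
        Effect = "Allow"
        Principal = {
          AWS = "arn:aws:iam::${data.aws_caller_identity.current.account_id}:role/aws-service-role/autoscaling.amazonaws.com/AWSServiceRoleForAutoScaling"
        }
        Action    = "kms:CreateGrant"
        Resource  = "*"
        Condition = { Bool = { "kms:GrantIsForAWSResource" = "true" } }
      },
    ]
  })
}
```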
I have tried looking through the logs, but have been unable to understand why this might be happening - really the only difference I am adding is the KMS key on the root volume (as far as I can tell).
There are very limited logs showing what is going on.
What you expected to happen?
Volumes mount as they would on an unencrypted EKS node.
Environment