Unmanaged nodes go into unready status when fargate profile is added #2290
Labels
kind/feature: New feature or request
priority/important-longterm: Important over the long term, but may not be currently staffed and/or may require multiple releases
What happened?
When trying to add Fargate to an existing 1.14 cluster with unmanaged nodes: if I create a cluster with unmanaged nodes, everything is OK. As soon as I add a Fargate profile, the unmanaged node goes into NotReady status. The Fargate node is healthy and ready, and the pod running on Fargate is healthy and ready.
What you expected to happen?
That the unmanaged node remains in Ready status.
How to reproduce it?
Create a cluster with a single unmanaged node group using a config file
eksctl create cluster -f cluster-config.yaml
Add a Fargate profile targeting the default namespace to the config file, then run create nodegroup to add the fargatePodExecutionRoleARN to the cluster:
eksctl create nodegroup --config-file=cluster-config.yaml
Add the Fargate profile
eksctl create fargateprofile -f cluster-config.yaml
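For reference, a minimal ClusterConfig exercising these steps might look like the sketch below. The cluster name, region, instance type, and role ARN are placeholder assumptions and are not taken from the redacted config attached to this report.

```yaml
# Hypothetical sketch only; all names and ARNs are placeholders.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: example-cluster      # placeholder
  region: us-east-1          # placeholder
  version: "1.14"

iam:
  # Role Fargate uses to run pods; referenced when the nodegroup config is re-applied.
  fargatePodExecutionRoleARN: arn:aws:iam::111122223333:role/example-fargate-pod-execution-role  # placeholder

nodeGroups:
  - name: ng-1               # single unmanaged nodegroup
    instanceType: m5.large
    desiredCapacity: 1

fargateProfiles:
  - name: fp-default
    selectors:
      - namespace: default   # Fargate profile targeting the default namespace
```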
Anything else we need to know?
cluster-config.yaml with redacted information
Versions
Please paste in the output of these commands:
Logs
Updating the nodegroup to add the Fargate pod execution role to the cluster:
Creating the Fargate profile:
kubeconfig on the worker node that then goes into NotReady status:
aws-auth configmap
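The attached ConfigMap is not reproduced here. For orientation only: an aws-auth ConfigMap in which unmanaged nodes can still join the cluster typically contains a mapRoles entry for the node instance role alongside the Fargate pod execution role. The sketch below uses placeholder ARNs and is an assumption, not the reporter's redacted ConfigMap.

```yaml
# Illustrative sketch; role ARNs are placeholders.
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    # Mapping that lets unmanaged (self-managed) nodes register with the API server.
    - rolearn: arn:aws:iam::111122223333:role/example-unmanaged-node-instance-role
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
    # Mapping for the Fargate pod execution role.
    - rolearn: arn:aws:iam::111122223333:role/example-fargate-pod-execution-role
      username: system:node:{{SessionName}}
      groups:
        - system:bootstrappers
        - system:nodes
        - system:node-proxier
```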
Kubelet logs after the node goes into NotReady status:
$ journalctl -u kubelet -n 100