EKS - Kubernetes 1.11 Upgrade #214
Comments
Agree 100%. In my experience the current behaviour automatically updates the AMI, which is not good. I think we need to wait for the terraform resource
I was just investigating that side of things. Yeah, any actual changes should be in line with the provider, hopefully not forcing a new resource 🤞 I can always split the AMI filter issue from the upgrade issue so that the module at least is consistently using 1.10 resources by default prior to upgrade support.
That would be great, it needs fixing anyway 💛
Might not have to wait too long for either.
For AMI versioning, are you thinking of pinning it to a specific AMI revision per module release? e.g.
Something like that. Or can we select the latest minor version of the major cluster version using the AMI data source filters?
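One way that selection could look (a hedged sketch, not necessarily how the module should wire it up; `var.cluster_version` is an assumed variable name) is to interpolate the cluster version into the AMI name filter under the new naming scheme:

```hcl
# Sketch: pin the worker AMI lookup to the cluster's Kubernetes version.
# 602401143452 is the Amazon EKS AMI account; `var.cluster_version` is
# an illustrative variable name, not necessarily the module's.
data "aws_ami" "eks_worker" {
  most_recent = true
  owners      = ["602401143452"]

  filter {
    name   = "name"
    values = ["amazon-eks-node-${var.cluster_version}-v*"]
  }
}
```

With `most_recent = true`, this still floats to the newest AMI revision, but only within the pinned Kubernetes version.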
Regarding the AMI issue, how does this look? #215
In regards to upgrading existing clusters, I just triggered the update on the EKS console; it took about 25 mins. Then updated to the latest
@max-rocket-internet does that mean you left your value for the cluster version at 1.10 in the tf file after the upgrade you did via the console?
Went through the upgrade process via terraform directly with the 2.0.0 tag - no UI.
Is it possible to update existing workers to the new AMI?
Nope, because updating to version
So TF did the upgrade? For me it wanted to recreate the whole cluster.
Yeah, as long as you have the updated aws provider (1.52.0), it works as expected.
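For reference, a minimal provider constraint that picks up that in-place update behaviour (a sketch; the region value is illustrative, and you may prefer a stricter pin):

```hcl
# aws provider 1.52.0+ updates aws_eks_cluster's version in place
# instead of forcing a replacement.
provider "aws" {
  region  = "eu-west-1" # illustrative
  version = ">= 1.52.0"
}
```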
I think we can close this now.
I have issues
With the release of EKS 1.11 upgrades, this module should decide its default posture on upgrades going forward.
https://aws.amazon.com/blogs/compute/making-cluster-updates-easy-with-amazon-eks/
Questions for maintainers @max-rocket-internet @brandoconnor
Cluster Defaults
Should this module keep defaulting to the current 1.10 setup, requiring users to explicitly upgrade (favors existing users of this module)?
OR
Should this module follow the current EKS default (e.g. 1.11.5), requiring existing users to explicitly set the version (e.g. 1.10) if they relied on the previous default and don't wish to upgrade their cluster on the next terraform module release (e.g. 1.9.0)?
I personally lean towards matching the EKS defaults (which is what the terraform aws provider does), with proper communication in the CHANGELOG, and encouraging module users to explicitly set their version rather than depend on the default value.
Note: to confirm, changing the version currently forces a new resource:
version: "1.10" => "1.11" (forces new resource)
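A minimal sketch of the underlying resource change producing that plan (role ARN and subnet IDs are placeholders); with aws provider 1.52.0+ the same diff plans as an in-place update instead:

```hcl
resource "aws_eks_cluster" "example" {
  name     = "example"
  role_arn = "arn:aws:iam::111122223333:role/eks-cluster" # placeholder
  version  = "1.11"                                       # was "1.10"

  vpc_config {
    subnet_ids = ["subnet-aaaa1111", "subnet-bbbb2222"] # placeholders
  }
}
```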
Default AMI Filter
The current AMI filter is now too generic for the new EKS-optimized AMI naming convention:
Prior to 1.11 upgrade availability:
amazon-eks-node-vXX
After:
amazon-eks-node-<K8S-VERSION>-v<DATE>
The only reason the current module continues to pick the 1.10 AMI, matching its current 1.10 cluster default, is that AWS published it later than the 1.11 AMI.
If AWS publishes a new 1.11 AMI after a new 1.10 AMI, this module will pick up the 1.11 AMI as the latest, potentially mismatching the cluster default.
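To illustrate the failure mode (a sketch of a generic pattern; the module's exact filter may differ): a name filter that matches every EKS-optimized AMI combined with `most_recent = true` simply returns whichever AMI AWS published last, regardless of Kubernetes version:

```hcl
# Sketch of the too-generic lookup: matches amazon-eks-node-v24 (old
# scheme) and amazon-eks-node-1.11-v20181210 (new scheme) alike, so the
# result depends on publication order, not cluster version.
data "aws_ami" "eks_worker" {
  most_recent = true
  owners      = ["602401143452"]

  filter {
    name   = "name"
    values = ["amazon-eks-node-*"]
  }
}
```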
What's the expected behavior?
Upgrade posture documented and AMI filters updated to match upgrade posture.
Are you able to fix this problem and submit a PR? Link here if you have already.
I can, depending on the maintainers' decision on upgrade support.