
[EKS]: One-click Full Cluster Upgrade #600

Open
mohitanchlia opened this issue Nov 26, 2019 · 15 comments
Labels
EKS (Amazon Elastic Kubernetes Service) · Proposed (Community submitted issue)

Comments

@mohitanchlia

Automation in EKS as a service has arrived only in bits and pieces. EKS with managed nodes is not really useful without a one-click full upgrade, where the EKS version, aws-node, DNS, etc., along with the worker nodes, are upgraded without running or orchestrating commands manually. This can be broken down into two pieces: 1) a full cluster upgrade including nodes, and 2) worker-node-only upgrades for ongoing AMI rotation. This is critical functionality.
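For context, the manual orchestration a one-click upgrade would replace looks roughly like this today (cluster, node group, and version values are illustrative):

```bash
# Step 1: upgrade the control plane one minor version.
aws eks update-cluster-version \
  --name my-cluster \
  --kubernetes-version 1.21

# Step 2: roll the managed node group to a matching AMI release.
aws eks update-nodegroup-version \
  --cluster-name my-cluster \
  --nodegroup-name workers

# Step 3: patch add-ons such as kube-proxy and CoreDNS by hand.
```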

@mohitanchlia mohitanchlia added the Proposed Community submitted issue label Nov 26, 2019
@tabern tabern added the EKS Amazon Elastic Kubernetes Service label Nov 26, 2019
@tabern
Contributor

tabern commented Nov 27, 2019

Renaming this to 'One-click Full Cluster Upgrade'. Today EKS managed nodes supports worker node upgrades for AMI rotation, but this is not yet in the console (#605). See API documentation here: https://docs.aws.amazon.com/eks/latest/APIReference/API_UpdateNodegroupVersion.html
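For reference, that API maps to an AWS CLI call along these lines (cluster and node group names are illustrative); omitting the version flags should roll the group to the latest AMI release for its current Kubernetes version:

```bash
# AMI rotation for a managed node group via UpdateNodegroupVersion.
# With no version flags, the group moves to the latest AMI release
# for its current Kubernetes version.
aws eks update-nodegroup-version \
  --cluster-name my-cluster \
  --nodegroup-name workers
```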

@tabern tabern changed the title [EKS] [Auto Upgrade]: One click upgrade [EKS]: One-click Full Cluster Upgrade Nov 27, 2019
@mtparet

mtparet commented Jan 2, 2020

I just read the official AWS documentation for upgrading an EKS cluster: we have to manually execute kubectl commands to upgrade critical components of Kubernetes.
Really, for a "managed" service, is this a joke?
Even updating my on-premises cluster is easier than updating the so-called "managed" AWS Kubernetes service.

https://docs.aws.amazon.com/eks/latest/userguide/update-cluster.html
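For example, this is the kind of manual step the guide requires for kube-proxy (image tag and region here are illustrative; the guide lists the exact version for each cluster release):

```bash
# Hand-patching the kube-proxy DaemonSet image, per the upgrade guide.
kubectl set image daemonset.apps/kube-proxy -n kube-system \
  kube-proxy=602401143452.dkr.ecr.us-west-2.amazonaws.com/eks/kube-proxy:v1.15.11-eksbuild.1

# CoreDNS and the VPC CNI plugin need similar hand-applied updates.
```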

@irperez

irperez commented Jan 3, 2020

In Azure AKS, this is a single-click operation with a version drop-down.
[screenshot: AKS version upgrade drop-down]

@sandrom

sandrom commented Jan 14, 2020

The upgrade process can't get enough love; please work on this! It is incredibly important for production workloads to have a managed process for this :)

@raravena80

+1 for this

@ghost

ghost commented Jan 16, 2020

+1
EKS needs to close the gap with AKS and GKE.

@nydalal

nydalal commented Jul 14, 2020

+1

@damscott

This feature could impact people working under the assumption that AWS will not modify resources running inside the cluster, notably kube-proxy (#657).

@kylecompassion

+1

@alexey-pankratyev

+1

@smrutiranjantripathy

smrutiranjantripathy commented Nov 30, 2021

Hi Team,

There is a sample package, eks-one-click-cluster-upgrade, in aws-samples that provides similar functionality. It is a CLI utility that can be used to carry out the upgrade. Please check the package and share your feedback.

@kareem-elsayed

We can still say that EKS is not a fully managed Kubernetes engine if, for every release, we have to spend time and effort checking the release notes for every add-on.

@MMartyn

MMartyn commented Sep 8, 2022

One other aspect of upgrades that would be great to include in this effort is the ability to configure the upgrade timeout for node groups. Currently, if replacing a node takes longer than 15 minutes, the upgrade fails and rolls back.

Related doc: https://docs.aws.amazon.com/eks/latest/userguide/managed-node-update-behavior.html
Relevant part:

> Drains the pods from the node. If the pods don't leave the node within 15 minutes and there's no force flag, the upgrade phase fails with a PodEvictionFailure error. For this scenario, you can apply the force flag with the update-nodegroup-version request to delete the pods.
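A rough example of the workaround the doc describes, passing the force flag so pods that won't drain are deleted instead of failing the upgrade (names are illustrative):

```bash
# Proceed with the node group upgrade even if pods can't be evicted
# within the 15-minute window; they are deleted instead.
aws eks update-nodegroup-version \
  --cluster-name my-cluster \
  --nodegroup-name workers \
  --force
```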

@neerajprem

+1, we must have it.

@eperdeme

I'm not really understanding the value here. In a cluster where you've got many add-ons needed to make the cluster function, such as Istio, Argo, external-dns, Prometheus, cert-manager, etc., bumping the VPC CNI/kube-proxy is a trivial task, easily automated via your GitOps management methods.

Is the target customers running more out-of-the-box Kubernetes with a basic set of AWS-provided add-ons?
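For what it's worth, where EKS managed add-ons are in use, the bump can indeed be a single call per add-on; a sketch, with illustrative names and versions:

```bash
# Bump a managed add-on to a specific version; EKS handles the rollout.
aws eks update-addon \
  --cluster-name my-cluster \
  --addon-name vpc-cni \
  --addon-version v1.11.4-eksbuild.1 \
  --resolve-conflicts OVERWRITE
```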
