
EKS cluster and node upgrades #1197

Closed
tgeoghegan opened this issue Dec 9, 2021 · 1 comment · Fixed by #1297

Comments

@tgeoghegan
Contributor

On GKE, cluster and node upgrades are automatic. On EKS, upgrades are mostly managed but must be manually triggered by the customer. We can manage EKS cluster and worker node versions via Terraform. We also need to keep the cluster-autoscaler container image up to date so that its minor version matches the Kubernetes control plane's minor version. We should plumb variables for the cluster and cluster-autoscaler versions up into .tfvars and document a process for cluster updates.
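
For illustration, the plumbing could look something like this (variable and file names here are hypothetical, not necessarily what the repo uses):

```hcl
# variables.tf -- hypothetical variable names for illustration
variable "eks_cluster_version" {
  description = "Kubernetes minor version for the EKS control plane, e.g. \"1.21\""
  type        = string
}

variable "cluster_autoscaler_image_tag" {
  description = "cluster-autoscaler image tag; its minor version must match the control plane"
  type        = string
}
```

```hcl
# example environment .tfvars entries
eks_cluster_version          = "1.21"
cluster_autoscaler_image_tag = "v1.21.2"
```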

@tgeoghegan tgeoghegan added this to the Winter 2021-2022 stability milestone Dec 9, 2021
@tgeoghegan
Contributor Author

We should do this after #1046, because applying a cluster version change affects the Kubernetes cluster resource we use to configure the Kubernetes provider.
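
For context, the coupling is roughly this: the `kubernetes` provider is configured from attributes of the EKS cluster, so a cluster version change touches the same resource the provider configuration reads from. A minimal sketch, assuming the common data-source wiring (names here are hypothetical):

```hcl
# The kubernetes provider reads the cluster's endpoint and CA certificate,
# so an in-place cluster change can invalidate these values mid-apply.
data "aws_eks_cluster" "cluster" {
  name = var.eks_cluster_name # hypothetical variable
}

data "aws_eks_cluster_auth" "cluster" {
  name = var.eks_cluster_name
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.cluster.token
}
```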

tgeoghegan added a commit that referenced this issue Jan 19, 2022
Plumbs variables for versions of a few EKS cluster components into
environment .tfvars files so that we can perform upgrades via Terraform.
Specifically we configure:

  - The version of the EKS cluster itself. The EKS cluster version is derived
    from the upstream Kubernetes version. See [1] for details.
  - The version of the AMI used on worker nodes. The EKS API can infer an
    appropriate AMI from the cluster's Kubernetes version, and the `aws`
    provider documentation[2] suggests we could omit this, but we explicitly
    set the same version as the cluster, since the EKS documentation[3]
    states that node groups must be updated after the cluster is updated.
  - The VPC CNI add-on version. We must pick a version that is
    compatible with the cluster version, per EKS documentation[4].
  - The cluster autoscaler version. We must deploy a version that is
    compatible with the cluster Kubernetes version. See [5] for
    available versions.

[1]: https://docs.aws.amazon.com/eks/latest/userguide/update-cluster.html
[2]: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/eks_node_group
[3]: https://docs.aws.amazon.com/eks/latest/userguide/update-managed-node-group.html
[4]: https://docs.aws.amazon.com/eks/latest/userguide/managing-vpc-cni.html
[5]: https://github.com/kubernetes/autoscaler/releases

Resolves #1197
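
As a rough sketch of where each of those version variables lands (resource and variable names here are hypothetical, not the repo's actual modules):

```hcl
resource "aws_eks_cluster" "cluster" {
  name     = var.eks_cluster_name
  role_arn = var.cluster_role_arn
  # Upgrading the control plane: bump this in .tfvars and apply.
  version = var.eks_cluster_version

  vpc_config {
    subnet_ids = var.subnet_ids
  }
}

resource "aws_eks_node_group" "workers" {
  cluster_name    = aws_eks_cluster.cluster.name
  node_group_name = "workers"
  node_role_arn   = var.node_role_arn
  subnet_ids      = var.subnet_ids
  # Pin the node AMI's Kubernetes version explicitly rather than letting
  # the EKS API infer it; keep it equal to the cluster version.
  version = var.eks_cluster_version

  scaling_config {
    desired_size = 2
    max_size     = 4
    min_size     = 1
  }
}

resource "aws_eks_addon" "vpc_cni" {
  cluster_name = aws_eks_cluster.cluster.name
  addon_name   = "vpc-cni"
  # Must be a version compatible with the cluster's Kubernetes version.
  addon_version = var.vpc_cni_addon_version
}
```

The cluster-autoscaler image tag (e.g. a hypothetical `var.cluster_autoscaler_image_tag`) would similarly flow into whatever Kubernetes deployment manifest runs the autoscaler.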