Fix MNG scale command missing $
niallthomson committed Aug 30, 2024
1 parent 3e5ab4a commit 26dba28
Showing 2 changed files with 12 additions and 19 deletions.
@@ -14,7 +14,7 @@ $ eksctl get nodegroup --name $EKS_DEFAULT_MNG_NAME --cluster $EKS_CLUSTER_NAME
We'll scale the nodegroup in `eks-workshop` by changing the **desired capacity** from `3` to `4` using the following command:

```bash
aws eks update-nodegroup-config --cluster-name $EKS_CLUSTER_NAME \
$ aws eks update-nodegroup-config --cluster-name $EKS_CLUSTER_NAME \
--nodegroup-name $EKS_DEFAULT_MNG_NAME --scaling-config minSize=4,maxSize=6,desiredSize=4
```
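After issuing the update, the resulting scaling configuration can be read back and checked. A minimal sketch, parsing a hard-coded sample payload shaped like the `scalingConfig` object that `aws eks describe-nodegroup` returns (the JSON below is illustrative, not real cluster output):

```bash
# Illustrative payload shaped like the output of:
#   aws eks describe-nodegroup --cluster-name "$EKS_CLUSTER_NAME" \
#     --nodegroup-name "$EKS_DEFAULT_MNG_NAME" --query nodegroup.scalingConfig
# In a real session this would come from the CLI; it is hard-coded here as a sketch.
SCALING='{"minSize": 4, "maxSize": 6, "desiredSize": 4}'

# Pull out desiredSize with sed so no extra tooling is required
DESIRED=$(printf '%s' "$SCALING" | sed -n 's/.*"desiredSize": *\([0-9]*\).*/\1/p')
echo "desired capacity: $DESIRED"
```

In a real session you would compare `DESIRED` against the value you requested before moving on.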

@@ -24,29 +24,21 @@ After making changes to the node group it may take up to **2-3 minutes** for nod
$ eksctl get nodegroup --name $EKS_DEFAULT_MNG_NAME --cluster $EKS_CLUSTER_NAME
```

To wait until the node group update operation is complete you can run this command:

```bash hook=wait-node
$ aws eks wait nodegroup-active --cluster-name $EKS_CLUSTER_NAME --nodegroup-name $EKS_DEFAULT_MNG_NAME
```

Once the command above completes we can review the changed worker node count with the following command, which lists all nodes in our managed node group by using the label as a filter:
Monitor the nodes in the cluster with the following command, which uses the `--watch` argument, until there are 4 nodes:

:::tip
It can take a minute or so for the new node to appear in the output below; if the list still shows 3 nodes, be patient.
:::

```bash
$ kubectl get nodes -l eks.amazonaws.com/nodegroup=$EKS_DEFAULT_MNG_NAME
NAME STATUS ROLES AGE VERSION
ip-10-42-104-151.us-west-2.compute.internal Ready <none> 2d23h vVAR::KUBERNETES_NODE_VERSION
ip-10-42-144-11.us-west-2.compute.internal Ready <none> 2d23h vVAR::KUBERNETES_NODE_VERSION
ip-10-42-146-166.us-west-2.compute.internal NotReady <none> 18s vVAR::KUBERNETES_NODE_VERSION
ip-10-42-182-134.us-west-2.compute.internal Ready <none> 2d23h vVAR::KUBERNETES_NODE_VERSION
```
```bash hook=wait-node
$ kubectl get nodes --watch
NAME STATUS ROLES AGE VERSION
ip-10-42-104-151.us-west-2.compute.internal Ready <none> 3h vVAR::KUBERNETES_NODE_VERSION
ip-10-42-144-11.us-west-2.compute.internal Ready <none> 3h vVAR::KUBERNETES_NODE_VERSION
ip-10-42-146-166.us-west-2.compute.internal NotReady <none> 18s vVAR::KUBERNETES_NODE_VERSION
ip-10-42-182-134.us-west-2.compute.internal Ready <none> 3h vVAR::KUBERNETES_NODE_VERSION
```

Notice that the node shows a status of `NotReady`, which happens when the new node is still in the process of joining the cluster. We can also use `kubectl wait` to watch until all the nodes report `Ready`:
Once 4 nodes are visible, you can exit the watch using `Ctrl+C`.
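For an unattended variant of this watch, the polling can be scripted. A minimal sketch, where `count_nodes` is a hypothetical stand-in for `kubectl get nodes --no-headers | wc -l` so the loop can be demonstrated without a live cluster:

```bash
# Hypothetical helper standing in for: kubectl get nodes --no-headers | wc -l
count_nodes() {
  echo 4
}

# Poll until the node count reaches the target or the timeout elapses
wait_for_nodes() {
  target=$1
  timeout=${2:-300}
  elapsed=0
  while [ "$(count_nodes)" -lt "$target" ]; do
    [ "$elapsed" -ge "$timeout" ] && return 1
    sleep 5
    elapsed=$((elapsed + 5))
  done
  return 0
}

wait_for_nodes 4 && echo "all 4 nodes present"
```

With the real `kubectl`-backed helper substituted in, `wait_for_nodes 4` would block until the fourth node registers or the timeout is hit.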

```bash hook=add-node
$ kubectl wait --for=condition=Ready nodes --all --timeout=300s
```
You may see a node show a status of `NotReady`; this happens when the new node is still in the process of joining the cluster.
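The `NotReady` state can also be picked out of the listing programmatically. A minimal sketch using `awk` over sample text shaped like the NAME/STATUS columns of `kubectl get nodes` output (the lines below are illustrative, not real cluster output):

```bash
# Sample lines shaped like the NAME and STATUS columns of `kubectl get nodes`;
# in a real session this would be piped from kubectl instead
NODES='ip-10-42-104-151.us-west-2.compute.internal Ready
ip-10-42-146-166.us-west-2.compute.internal NotReady'

# Count entries whose STATUS column is NotReady
NOT_READY=$(printf '%s\n' "$NODES" | awk '$2 == "NotReady" { n++ } END { print n+0 }')
echo "nodes not ready: $NOT_READY"
```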
@@ -14,6 +14,7 @@ after() {

if [ $EXIT_CODE -ne 0 ]; then
>&2 echo "Node count did not increase to 4 as expected"
kubectl get node
exit 1
fi
}
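The hook fragment above can be read as part of a self-contained check. A minimal sketch in its spirit, where `node_count` is a hypothetical stand-in for `kubectl get node --no-headers | wc -l` (the surrounding test harness that sets `EXIT_CODE` is not shown in the diff):

```bash
# Hypothetical helper standing in for: kubectl get node --no-headers | wc -l
node_count() {
  echo 4
}

# Verification hook sketched after the fragment above: fail loudly if the
# managed node group did not reach 4 nodes
after() {
  if [ "$(node_count)" -ne 4 ]; then
    >&2 echo "Node count did not increase to 4 as expected"
    return 1
  fi
}

after && echo "node count check passed"
```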
