use correct ns in quota lab
splattner committed Jan 26, 2024
1 parent 076c94c commit 1c25d8c
Showing 1 changed file with 13 additions and 13 deletions.
@@ -32,13 +32,13 @@ Defining ResourceQuotas makes sense when the cluster administrators want to have
In order to check for defined quotas in your Namespace, simply see if there are any of type ResourceQuota:

```bash
-{{% param cliToolName %}} get resourcequota --namespace <namespace>
+{{% param cliToolName %}} get resourcequota --namespace <namespace>-quota
```

To show in detail what kinds of limits the quota imposes:

```bash
-{{% param cliToolName %}} describe resourcequota <quota-name> --namespace <namespace>
+{{% param cliToolName %}} describe resourcequota <quota-name> --namespace <namespace>-quota
```
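
For orientation, a ResourceQuota is an ordinary namespaced object. A minimal sketch of such a definition is shown below; the name and the hard values are purely illustrative and not the ones enforced in this lab:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: lab-quota           # illustrative name
spec:
  hard:
    requests.cpu: 100m      # cap on the sum of all CPU requests in the Namespace
    requests.memory: 100Mi  # cap on the sum of all memory requests
    limits.cpu: 200m        # cap on the sum of all CPU limits
    limits.memory: 200Mi    # cap on the sum of all memory limits
```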

{{% onlyWhenNot openshift %}}
@@ -149,7 +149,7 @@ Remember to use the namespace `<username>-quota-test`, otherwise this lab will n
Analyse the LimitRange in your Namespace (there has to be one; if not, you are using the wrong Namespace):

```bash
-{{% param cliToolName %}} describe limitrange --namespace <namespace>
+{{% param cliToolName %}} describe limitrange --namespace <namespace>-quota
```

The command above should output this (name and Namespace will vary):
@@ -166,7 +166,7 @@ Container cpu - - 10m 100m -
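
Under the hood, this output corresponds to a LimitRange object roughly like the following sketch (the 10m/100m CPU defaults match the line above; the memory values are assumptions, based on the 16 MiB default request mentioned further down):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: lab-limitrange   # illustrative name
spec:
  limits:
  - type: Container
    defaultRequest:      # injected when a container specifies no requests
      cpu: 10m
      memory: 16Mi       # assumption, matching the 16 MiB default request mentioned later
    default:             # injected when a container specifies no limits
      cpu: 100m
      memory: 32Mi       # assumed value for illustration
```
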
Check for the ResourceQuota in your Namespace (there has to be one; if not, you are using the wrong Namespace):

```bash
-{{% param cliToolName %}} describe quota --namespace <namespace>
+{{% param cliToolName %}} describe quota --namespace <namespace>-quota
```

The command above will produce an output similar to the following (name and Namespace may vary):
@@ -208,7 +208,7 @@ spec:
Apply this resource with:
```bash
-{{% param cliToolName %}} apply -f pod_stress2much.yaml --namespace <namespace>
+{{% param cliToolName %}} apply -f pod_stress2much.yaml --namespace <namespace>-quota
```

{{% alert title="Note" color="info" %}}
@@ -218,7 +218,7 @@ You have to actively terminate the following command pressing `CTRL+c` on your k
Watch the Pod's creation with:

```bash
-{{% param cliToolName %}} get pods --watch --namespace <namespace>
+{{% param cliToolName %}} get pods --watch --namespace <namespace>-quota
```

You should see something like the following:
@@ -236,7 +236,7 @@ stress2much 0/1 CrashLoopBackOff 1 20s
The `stress2much` Pod was OOM (out of memory) killed. We can see this in the `STATUS` field. Another way to find out why a Pod was killed is by checking its status. Output the Pod's YAML definition:

```bash
-{{% param cliToolName %}} get pod stress2much --output yaml --namespace <namespace>
+{{% param cliToolName %}} get pod stress2much --output yaml --namespace <namespace>-quota
```

Near the end of the output you can find the relevant status part:
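
For reference, for an OOM-killed container this part of the status generally has the following shape (a sketch; exact exit code, restart count and timestamps will differ):

```yaml
status:
  containerStatuses:
  - name: stress2much
    lastState:
      terminated:
        exitCode: 137          # 128 + SIGKILL, typical for OOM kills
        reason: OOMKilled
    restartCount: 1
    state:
      waiting:
        reason: CrashLoopBackOff
```
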
@@ -255,7 +255,7 @@ Near the end of the output you can find the relevant status part:
So let's look at the numbers to verify the container really had too little memory. We started the `stress` command using the parameter `--vm-bytes 85M` which means the process wants to allocate 85 megabytes of memory. Again looking at the Pod's YAML definition with:

```bash
-{{% param cliToolName %}} get pod stress2much --output yaml --namespace <namespace>
+{{% param cliToolName %}} get pod stress2much --output yaml --namespace <namespace>-quota
```

reveals the following values:
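
The interesting part is the container's `resources` section, which now carries the defaults injected by the LimitRange. Roughly sketched (the memory limit is an assumed value; the 10m/100m CPU defaults and the 16Mi memory request are the ones referenced in this lab):

```yaml
spec:
  containers:
  - name: stress2much
    resources:
      limits:
        cpu: 100m
        memory: 32Mi   # assumed default limit, in any case below the 85M the stress process allocates
      requests:
        cpu: 10m
        memory: 16Mi
```
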
@@ -279,7 +279,7 @@ Let's fix this by recreating the Pod and explicitly setting the memory request t
First, delete the `stress2much` pod with:

```bash
-{{% param cliToolName %}} delete pod stress2much --namespace <namespace>
+{{% param cliToolName %}} delete pod stress2much --namespace <namespace>-quota
```

Then create a new Pod where the requests and limits are set:
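
The lab's actual `pod_stress.yaml` is not reproduced in this excerpt; a comparable Pod definition with explicit requests and limits could look like the following sketch (image, arguments and resource values are illustrative, chosen so that the memory limit sits above the roughly 85 MiB the stress process allocates):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: stress
spec:
  containers:
  - name: stress
    image: polinux/stress                                       # illustrative stress image
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "85M", "--vm-hang", "1"]
    resources:
      requests:
        cpu: 10m
        memory: 85Mi     # explicit request instead of the 16Mi default
      limits:
        cpu: 100m
        memory: 100Mi    # comfortably above the 85M allocation
```
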
@@ -314,7 +314,7 @@ spec:
And apply this again with:
```bash
-{{% param cliToolName %}} apply -f pod_stress.yaml --namespace <namespace>
+{{% param cliToolName %}} apply -f pod_stress.yaml --namespace <namespace>-quota
```

{{% alert title="Note" color="info" %}}
@@ -356,7 +356,7 @@ spec:
```
```bash
-{{% param cliToolName %}} apply -f pod_overbooked.yaml --namespace <namespace>
+{{% param cliToolName %}} apply -f pod_overbooked.yaml --namespace <namespace>-quota
```

We are immediately confronted with an error message:
@@ -370,7 +370,7 @@ The default request value of 16 MiB of memory that was automatically set on the
Let's have a closer look at the quota with:

```bash
-{{% param cliToolName %}} get quota --output yaml --namespace <namespace>
+{{% param cliToolName %}} get quota --output yaml --namespace <namespace>-quota
```

which should output the following YAML definition:
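
The exact numbers depend on the quota defined for your Namespace; the shape of such a ResourceQuota object, with the configured `spec.hard` values and the live accounting under `status`, is sketched below (values illustrative, the point being that `used` plus the 16Mi default request of the new Pod would exceed `hard`):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: lab-quota            # illustrative name
spec:
  hard:
    requests.memory: 100Mi   # illustrative value
status:
  hard:
    requests.memory: 100Mi
  used:
    requests.memory: 85Mi    # already consumed by the stress Pod; 85Mi + 16Mi > 100Mi
```
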
@@ -421,7 +421,7 @@ spec:
And apply with:
```bash
-{{% param cliToolName %}} apply -f pod_overbooked.yaml --namespace <namespace>
+{{% param cliToolName %}} apply -f pod_overbooked.yaml --namespace <namespace>-quota
```

Even though the limits of both Pods combined overstretch the quota, the requests do not, and so the Pods are allowed to run.
