This repository has been archived by the owner on Apr 25, 2024. It is now read-only.

Merge pull request #552 from aws-samples/geremyCohen-oom-detail-473
Update readme.adoc
dalbhanj authored Aug 24, 2018
2 parents e9437ca + d91196a commit dac8e1f
Showing 1 changed file with 5 additions and 1 deletion: 01-path-basics/103-kubernetes-concepts/readme.adoc
@@ -311,6 +311,10 @@ Watch the status of the Pod:
 
 `OOMKilled` shows that the container was terminated because it ran out of memory.
 
+To correct this, we'll need to re-create the pod with higher memory limits.
+
+Although it may be tempting to simply adjust the memory limit in the existing pod definition and re-apply it, Kubernetes does not currently support changing resource limits on running pods. We'll need to first delete the existing pod, then recreate it.
+
 In `pod-resources2.yaml`, confirm that the value of `spec.containers[].resources.limits.memory` is `300Mi`. Delete the existing Pod, and create a new one:
 
     $ kubectl delete -f pod-resources1.yaml
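
The hunk above asks the reader to confirm that `spec.containers[].resources.limits.memory` in `pod-resources2.yaml` is `300Mi`. As a minimal sketch of what such a manifest looks like (the pod name, container name, image, and request value below are illustrative assumptions, not taken from the repository):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: memory-demo            # hypothetical name
spec:
  containers:
  - name: memory-demo-ctr      # hypothetical container name
    image: polinux/stress      # hypothetical image
    resources:
      requests:
        memory: 100Mi          # assumed request; only the limit is stated in the text
      limits:
        memory: 300Mi          # the value the instructions ask you to confirm
```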
@@ -331,7 +335,7 @@ Get more details about the resources allocated to the Pod:

 === Quality of service
 
-Kubernetes opportunistically scavenge the difference between request and limit if they are not used by the Containers. This allows Kubernetes to oversubscribe nodes, which increases utilization, while at the same time maintaining resource guarantees for the containers that need guarantees.
+Kubernetes opportunistically scavenges the difference between request and limit if they are not used by the Containers. This allows Kubernetes to oversubscribe nodes, which increases utilization, while at the same time maintaining resource guarantees for the containers that need guarantees.
 
 Kubernetes assigns one of the QoS classes to the Pod:

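The QoS section above is truncated in this diff before the classes themselves appear. Assuming the standard three Kubernetes QoS classes (Guaranteed, Burstable, BestEffort), their classification rules can be sketched as plain Python; this is an illustration of the documented rules, not code from the repository:

```python
# Sketch of Kubernetes QoS classification (standard rules, simplified):
# - BestEffort: no container sets any request or limit
# - Guaranteed: every container sets cpu and memory limits, and its
#   requests (which default to the limits when unset) equal those limits
# - Burstable:  everything else

def qos_class(containers):
    """containers: list of dicts like {"requests": {...}, "limits": {...}}."""
    if all(not c.get("requests") and not c.get("limits") for c in containers):
        return "BestEffort"
    if all(
        c.get("limits")
        and set(c["limits"]) >= {"cpu", "memory"}
        and c.get("requests", c["limits"]) == c["limits"]
        for c in containers
    ):
        return "Guaranteed"
    return "Burstable"

# The pod from this lesson (100Mi request, 300Mi limit) is Burstable:
print(qos_class([{"requests": {"memory": "100Mi"},
                  "limits": {"memory": "300Mi"}}]))
```

Because the request is lower than the limit, the example pod lands in Burstable: Kubernetes may reclaim the unused gap between the two values, which is exactly the oversubscription behavior the section describes.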
