Kubernetes container limits documentation (#777)
wiktork authored Sep 2, 2021
1 parent 5a1624e commit f372d29
Showing 2 changed files with 21 additions and 1 deletion.
2 changes: 1 addition & 1 deletion documentation/README.md
@@ -10,7 +10,7 @@ When running a dotnet application, differences in diverse local and production e
 - [Getting Started](#)
 - [Running on a local machine](#)
 - [Running in Docker](#)
-- [Running in a kubernetes cluster](#)
+- [Running in a kubernetes cluster](./kubernetes.md)
 - [Enabling SSL](#)
 - [API Endpoints](./api/README.md)
 - [OpenAPI document](./openapi.json)
20 changes: 20 additions & 0 deletions documentation/kubernetes.md
@@ -0,0 +1,20 @@
# Running in Kubernetes

## Recommended container limits

```yaml
resources:
  requests:
    memory: "32Mi"
    cpu: "50m"
  limits:
    memory: "256Mi"
    cpu: "250m"
```
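
As a minimal sketch of where these limits would typically go (the Deployment layout, image names, and sidecar arrangement below are illustrative assumptions, not part of this commit), the `resources` block applies to the dotnet-monitor container in the pod spec:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sample-app
  template:
    metadata:
      labels:
        app: sample-app
    spec:
      containers:
      - name: app
        image: myregistry/sample-app:latest   # hypothetical application image
      - name: monitor
        image: mcr.microsoft.com/dotnet/monitor   # dotnet-monitor sidecar
        resources:
          requests:
            memory: "32Mi"
            cpu: "50m"
          limits:
            memory: "256Mi"
            cpu: "250m"
```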
How much memory and CPU dotnet-monitor consumes depends on which scenarios are being executed:
- Metrics consume a negligible amount of resources, although using custom metrics can affect this.
- Operations such as traces and logs may require additional memory in the main application container, which the runtime allocates automatically.
- Resource consumption by trace operations also depends on which providers are enabled and on the [buffer size](./api/definitions.md#EventProvidersConfiguration) allocated in the runtime.
- Avoid highly verbose [log levels](./api/definitions.md#LogLevel) while under load; they cause significant CPU usage in the dotnet-monitor container and additional memory pressure in the main application container.
- Dumps also temporarily increase the amount of memory consumed by the application container.
