Describe the bug
When we stress test the Seldon model, OOM errors occur after a large number of requests have been sent. We monitored the containers in the pod and found that the model container reaches its memory limit and is eventually OOM-killed. When we disable request/response payload logging in the SeldonDeployment (by commenting out the logger section), the OOM no longer occurs. What is the actual cause of the OOM? Is it that the request and response payloads are buffered in memory for logging, and this accumulation exhausts the container's memory?
```yaml
graph:
  children: []
  endpoint:
    type: "REST"
  # logger:
  #   mode: "all"
```
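For context, the graph above sits inside the SeldonDeployment spec roughly as sketched below. This is a minimal sketch assuming placeholder names (deployment, model, image, and logger URL are not from the issue); uncommenting the logger block is what re-enables payload logging, the configuration under which the OOM was observed.

```yaml
# Minimal SeldonDeployment sketch (names and image are placeholders, not from the issue).
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
  name: my-model                  # placeholder name
spec:
  predictors:
    - name: default
      replicas: 1
      graph:
        name: classifier          # must match the container name below
        type: MODEL
        children: []
        endpoint:
          type: "REST"
        # logger:
        #   mode: "all"           # log both request and response payloads
        #   url: http://my-logger-svc:8080   # placeholder logging sink
      componentSpecs:
        - spec:
            containers:
              - name: classifier
                image: my-registry/my-model:latest   # placeholder image
                resources:
                  limits:
                    memory: 1Gi   # the limit being hit under load
```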