conmon high CPU after a container exec via the REST API #221
Comments
codemaker219 referenced this issue in bitsolve/podman on Dec 16, 2020
Thanks for reaching out, @codemaker219! I can reproduce, but only on cgroups v1 (crun and runc). I cannot reproduce on cgroups v2 (crun). @giuseppe could you have a look?
giuseppe added a commit to giuseppe/conmon that referenced this issue on Dec 18, 2020
Now that we use a delay to call the cleanup program, we might end up in a race where the event fd used by glib is close'd, which causes the glib event handler to keep polling the closed file descriptor in a tight loop. To avoid closing files that are handled by glib, store which FDs are open when conmon first starts and close only them. Closes: containers#221 Signed-off-by: Giuseppe Scrivano <[email protected]>
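The idea the commit message describes is to snapshot the file descriptors that are open when conmon starts and, when the delayed cleanup runs, close only that set, so the event fd glib opens later is never closed out from under its event handler. A minimal C sketch of that idea follows; the names and structure are illustrative assumptions, not the actual code from PR #222:

```c
/* Sketch only: record the FDs open at startup and later close only those,
 * leaving descriptors opened afterwards (e.g. by glib) untouched. */
#include <dirent.h>
#include <stdlib.h>
#include <unistd.h>

static int *initial_fds;        /* FDs that were open when the process started */
static size_t n_initial_fds;

/* Snapshot the currently open FDs by walking /proc/self/fd. */
static void record_startup_fds(void)
{
    DIR *dir = opendir("/proc/self/fd");
    struct dirent *ent;
    size_t cap = 16;

    if (dir == NULL)
        return;

    initial_fds = malloc(cap * sizeof(*initial_fds));
    if (initial_fds == NULL) {
        closedir(dir);
        return;
    }

    while ((ent = readdir(dir)) != NULL) {
        int fd;

        if (ent->d_name[0] == '.')
            continue;                   /* skip "." and ".." */
        fd = atoi(ent->d_name);
        if (fd == dirfd(dir))
            continue;                   /* skip the handle used for this walk */
        if (n_initial_fds == cap) {
            cap *= 2;
            initial_fds = realloc(initial_fds, cap * sizeof(*initial_fds));
            if (initial_fds == NULL)
                break;
        }
        initial_fds[n_initial_fds++] = fd;
    }
    closedir(dir);
}

/* At cleanup time, close only the FDs recorded at startup (above stderr),
 * so anything glib opened afterwards stays open and its event loop
 * never spins on a closed descriptor. */
static void close_startup_fds(void)
{
    size_t i;

    for (i = 0; i < n_initial_fds; i++)
        if (initial_fds[i] > STDERR_FILENO)
            close(initial_fds[i]);
}
```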
PR here: #222
Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)
/kind bug
Description
After multiple execs on a container, conmon processes spawn that take 100% CPU for about 5 minutes.
Steps to reproduce the issue:
Create a container
podman run --rm --name test -d docker.io/nginx
Start the service for the REST API
podman system service tcp:0.0.0.0:8090 -t0
Do MULTIPLE execs into the container, e.g.
for i in {1..20}; do podman --remote --url tcp://127.0.0.1:8090 exec test ls; done
(This may need to be run 2 or 3 times.)
Describe the results you received:
After that, some conmon processes spawn that take ~100% CPU.
Describe the results you expected:
Normal CPU usage :-)
Additional information you deem important (e.g. issue happens only occasionally):
It seems irrelevant which container and which command are executed, but it only happens after roughly 10 or more executions.
Output of podman version:
Output of podman info --debug:
Package info (e.g. output of rpm -q podman or apt list podman):
Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide?
Yes
Additional environment details (AWS, VirtualBox, physical, etc.):
Tested in a VirtualBox vm