Bug - Deadlock when using multithreaded & multiprocessing Environment #231
Comments
Yeah, it's a problem that's been hanging in my backlog for a while. This is the same problem as with the standard logging module. Anyway, thanks a lot for the report and the fully workable reproducible example! I will implement a fix for the next version.
What a journey! There was not one, not two, but three bugs here! 😄 Among them: 1. a deadlock while mixing threading with multiprocessing (the scenario reported here), and 2. a deadlock while using enqueue=True with multiprocessing.
@kapsh I will try to release a new version this week-end. However, I don't encounter any deadlock wrapping Process.run(). Could you please open a new ticket with the snippet that triggered the bug?
Sorry, no snippet (yet). This is quite tangled code that logs from the main process, from a subprocess, into a file, into another file, and so on. But I'll try to narrow down a minimal reproducible piece from it.
@kapsh You can now update loguru to the newly released version. If you encounter any other problem, please open a new issue so I can investigate it. 😉
@Delgan Meanwhile I couldn't catch the deadlock in my script (it's pretty annoying to reproduce), yet I suspect it was caused by my own interaction with queues.
I can still reproduce this issue ("2. Deadlock while using enqueue=True with multiprocessing") with version 0.5.3; am I missing something? I used this code to test:
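A minimal sketch of that kind of test, assuming Linux's default fork start method and an enqueue=True file sink shared by several worker processes (the file name and message count are illustrative, not the commenter's original snippet):

```python
# Illustrative sketch: several worker processes logging through one
# enqueue=True sink, the scenario described as bug 2 above.
import multiprocessing

from loguru import logger


def worker(n):
    for i in range(1000):
        logger.info("worker {} message {}", n, i)


if __name__ == "__main__":
    logger.remove()
    # enqueue=True routes records through an internal queue so the file
    # is only written to from a single place.
    logger.add("test.log", enqueue=True)

    processes = [multiprocessing.Process(target=worker, args=(n,)) for n in range(4)]
    for p in processes:
        p.start()
    for p in processes:
        p.join()
```

On the affected versions a worker could hang inside logger.info(); with the fix (and Python 3.7 or later, as noted below) all workers should complete.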
@ahuan2616 Which Python version are you running when this happens? In my case I concluded that some 3.6 release was responsible, because it stopped reproducing under 3.9.
@ahuan2616 Sorry, it seems I forgot to mention that the fix requires at least Python 3.7, as @kapsh suggested.
It seems that with Python 3.8.0 and loguru 0.6.0 it still goes into a deadlock. I don't have spare time for a detailed analysis right now, but I can post some info for you.
Environment:
Linux xxxxxx 5.15.0-50-generic #56-Ubuntu SMP Tue Sep 20 13:23:26 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
Python 3.8.0 (default, Nov 6 2019, 21:49:08)
In Process 0: it runs normally and reaches a sync command (after printing "try resize").
2022-10-22 06:50:52.848 | DEBUG | yolox.core.trainer:after_iter:280 - print info msg
In Process 1: it gets stuck in logger.info.
2022-10-22 06:50:52.835 | DEBUG | yolox.core.trainer:after_iter:278 - get gpu number
In Process 2: it gets stuck in logger.info until I kill process 1, at which point it jumps out of the deadlock, continues to run, and reaches a sync command (after printing "try resize"). It was stuck for about an hour; you can see it from the timestamps.
2022-10-22 06:50:52.837 | DEBUG | yolox.core.trainer:after_iter:278 - get gpu number
Short discussion: the message string is created normally (the timestamp is normal), and it gets stuck when outputting the message. Killing a process releases the deadlock. My temporary workaround is not to log from more than one subprocess. I will update with more info when I get time.
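A sketch of that workaround, with a hypothetical rank parameter standing in for however the training script identifies its workers: only one process attaches a sink, so the others never touch the handler.

```python
# Hypothetical sketch of the "log from only one subprocess" workaround.
from loguru import logger


def setup_logging(rank: int) -> None:
    """Attach a sink only in the first worker; other workers log nothing."""
    logger.remove()
    if rank == 0:
        logger.add("train.log")
```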
Hi,
Python version: 3.6.8
loguru version: 0.4.1
OS: Linux
Dev environment: Terminal
Bug Description: Creating a new process while another thread is logging will cause a deadlock inside the new process whenever it calls the logger.
I assume this happens because the handler's _lock is already held when the process is created, so from the new process's perspective _lock stays held forever [see the first comment here].
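That assumption matches the classic fork-plus-lock hazard. A bare illustration of the mechanism, independent of loguru (POSIX only, and it hangs by design):

```python
# A lock held by the parent at fork time is copied into the child in its
# locked state, and nothing in the child will ever release it.
import os
import threading

lock = threading.Lock()
lock.acquire()            # e.g. a logging thread currently holding the handler lock

pid = os.fork()
if pid == 0:
    lock.acquire()        # the child blocks here forever
    print("never reached")
    os._exit(0)
else:
    os.waitpid(pid, 0)    # the parent waits for a child that can never finish
```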
Reproduce code:
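A minimal sketch of the described scenario (a reconstruction under the versions above, not necessarily the original snippet): one background thread logs continuously while the main thread keeps creating worker processes that also log.

```python
# Reconstruction sketch of the report: a logging thread plus freshly
# created processes that call the logger (Linux, default "fork" start method).
import multiprocessing
import threading

from loguru import logger

logger.remove()
logger.add("out.log")  # a regular sink, protected internally by a lock


def log_forever():
    # Keeps the handler's internal lock busy from a background thread.
    while True:
        logger.info("logging from the background thread")


def child():
    # If the fork happened while that lock was held, this call never returns.
    logger.info("logging from the child process")


if __name__ == "__main__":
    threading.Thread(target=log_forever, daemon=True).start()
    for _ in range(100):
        p = multiprocessing.Process(target=child)
        p.start()
        p.join()  # hangs once a fork lands while the handler lock is held
```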