
[3.7] bpo-6721: Hold logging locks across fork() (GH-4071) #9291

Merged
1 commit merged into python:3.7 on Oct 7, 2018

Conversation

miss-islington
Contributor

@miss-islington commented Sep 14, 2018

bpo-6721: When os.fork() is called while another thread holds a logging lock, the child process may deadlock when it tries to log. This fixes that by acquiring all logging locks before fork and releasing them afterwards.
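Roughly, this is the kind of mechanism os.register_at_fork() (new in 3.7) enables: acquire the logging locks in a `before` hook and release them again in both `after_in_parent` and `after_in_child`. The sketch below is illustrative, not the exact patch; the `_handlers_to_lock` list and the use of the private `logging._acquireLock()`/`_releaseLock()` helpers are assumptions made for the example.

```python
import logging
import os

# Illustrative registry of handlers whose locks should be held across fork().
# (The real change tracks handlers inside the logging module itself.)
_handlers_to_lock = []

def _acquire_logging_locks():
    # Take the module-level lock first, then each handler's I/O lock, so no
    # other thread can be in the middle of a logging call at the moment
    # fork() snapshots the process.
    logging._acquireLock()
    for handler in _handlers_to_lock:
        handler.acquire()

def _release_logging_locks():
    # Release in reverse order, in both the parent and the child, so each
    # side of the fork ends up with all logging locks in an unlocked state.
    for handler in reversed(_handlers_to_lock):
        handler.release()
    logging._releaseLock()

if hasattr(os, 'register_at_fork'):  # POSIX-only hook, added in Python 3.7
    os.register_at_fork(before=_acquire_logging_locks,
                        after_in_parent=_release_logging_locks,
                        after_in_child=_release_logging_locks)
```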

A regression test that fails before this change is included.
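In rough outline, such a test (a sketch of the idea, not the exact test that was committed) forks while a background thread hammers a logger, then checks that the child can still log and exit within a bounded time:

```python
import logging
import os
import signal
import sys
import threading
import time
import unittest

class ForkWhileLoggingTest(unittest.TestCase):
    def test_child_can_log_after_fork(self):
        logger = logging.getLogger('fork-test')
        logger.addHandler(logging.StreamHandler(sys.stderr))
        stop = threading.Event()

        def spam():
            # Keep logging locks busy from another thread while we fork.
            while not stop.is_set():
                logger.warning('spam')

        thread = threading.Thread(target=spam)
        thread.start()
        try:
            pid = os.fork()
            if pid == 0:
                # Child: without the fix this can block forever on a lock
                # that the parent's spam thread held at fork time.
                logger.warning('hello from the child')
                os._exit(0)
            # Parent: give the child a bounded amount of time to finish.
            deadline = time.monotonic() + 10
            while time.monotonic() < deadline:
                wpid, status = os.waitpid(pid, os.WNOHANG)
                if wpid == pid:
                    self.assertTrue(os.WIFEXITED(status))
                    self.assertEqual(os.WEXITSTATUS(status), 0)
                    break
                time.sleep(0.05)
            else:
                os.kill(pid, signal.SIGKILL)
                self.fail('child appears to have deadlocked after fork()')
        finally:
            stop.set()
            thread.join()
```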

Within the new unittest itself: there is a small potential for the child process to hang, due to mixing fork() with threads, if the parent's thread happened to hold a non-reentrant library lock (malloc?) at the moment os.fork() happens. Buildbots and time will tell whether this actually manifests in this test. :/ A functionality test that avoids that would be a challenge.

If that does happen, the next best alternative would be an alternate test that doesn't try to produce the deadlock itself, but just checks that the acquire and release calls are made around the fork.
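For example (purely a sketch of that idea, under the assumption that the fork hooks go through each handler's acquire()/release() and therefore its semi-public `lock` attribute), one could swap in a recording lock and assert that the parent saw an acquire/release pair around the fork:

```python
import logging
import os
import threading

class RecordingLock:
    """RLock wrapper that records acquire/release calls (illustrative only)."""
    def __init__(self):
        self._lock = threading.RLock()
        self.events = []

    def acquire(self):
        self.events.append('acquire')
        self._lock.acquire()

    def release(self):
        self.events.append('release')
        self._lock.release()

handler = logging.StreamHandler()
handler.lock = RecordingLock()  # Handler.createLock() normally installs an RLock here
logging.getLogger('probe').addHandler(handler)

seen_before = len(handler.lock.events)
pid = os.fork()
if pid == 0:
    os._exit(0)
os.waitpid(pid, 0)

# With the fix in place, the before/after fork hooks should have acquired and
# released the handler's lock in the parent.
after = handler.lock.events[seen_before:]
assert 'acquire' in after and 'release' in after
```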
(cherry picked from commit 1900384)

Co-authored-by: Gregory P. Smith <[email protected]>

https://bugs.python.org/issue6721

@gpshead
Member

gpshead commented Sep 14, 2018

I'm letting this change "bake" in master for a bit, monitoring buildbots, before approving this 3.7 backport.

@gpshead self-assigned this Sep 14, 2018
@miss-islington
Contributor Author

@gpshead: Status check is done, and it's a success ✅ .


@ned-deily
Member

@gpshead Is this ready for 3.7?

@gpshead merged commit 3b69993 into python:3.7 on Oct 7, 2018
@bedevere-bot

@gpshead: Please replace # with GH- in the commit message next time. Thanks!

Labels: sprint, type-bug (an unexpected behavior, bug, or error)