Fix Python Shm Client Leak #614
Conversation
Nice find!
Haven't even put the PRs up for review and you've already found it. Nothing gets past you! 😄
```diff
@@ -145,6 +145,8 @@ SharedMemoryRegionDestroy(void* shm_handle)
     return -5;
   }
 
+  delete handle;
```
Given the return statement above, this may not be enough. Does the failure to `shm_unlink` mean that `handle` is invalid and should not be deleted? It seems to me that the memory is still allocated.
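To make the concern concrete, here is a minimal sketch of what the surrounding function plausibly looks like, reconstructed from the hunk above. The struct layout, field names, and the `-4` return code are assumptions for illustration; only `SharedMemoryRegionDestroy`, the `return -5;`, and the added `delete handle;` come from the diff.

```cpp
#include <sys/mman.h>  // munmap, shm_unlink

#include <cstddef>
#include <string>

// Assumed internal bookkeeping struct; the fields are illustrative only.
struct SharedMemoryHandle {
  std::string shm_key;
  void* base_addr;
  size_t byte_size;
};

int
SharedMemoryRegionDestroy(void* shm_handle)
{
  SharedMemoryHandle* handle = static_cast<SharedMemoryHandle*>(shm_handle);

  // Best-effort teardown of the OS-level resources.
  if (munmap(handle->base_addr, handle->byte_size) == -1) {
    return -4;  // early return: `handle` itself is never freed on this path
  }
  if (shm_unlink(handle->shm_key.c_str()) == -1) {
    return -5;  // the return statement the comment above refers to
  }

  delete handle;  // the line added by this PR; skipped by both early returns
  return 0;
}
```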
Good catch, J! No, in this case `handle` refers to a dynamically allocated instance of our internal struct definition, as opposed to an OS handle. We are doing best-effort cleanup with `munmap` and `shm_unlink`, but we have full control to deallocate `handle`.
Testing various cases with the new change also reminded me of a minor logical error within the Python client code that we should clean up here as well. Note that I elected to use a local smart pointer to do the cleanup, to avoid having to write three proximal `delete`s; however, I can switch to an explicit `delete` to match the allocation syntax.
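A minimal sketch of the smart-pointer approach described above, reusing the hypothetical struct and function shape from the previous sketch; transferring ownership to a scoped `std::unique_ptr` frees the struct on every return path without repeating `delete`:

```cpp
#include <sys/mman.h>  // munmap, shm_unlink

#include <memory>

// SharedMemoryHandle: the assumed struct from the previous sketch.

int
SharedMemoryRegionDestroy(void* shm_handle)
{
  // Take ownership up front: the unique_ptr destructor frees the struct on
  // every return path, so no explicit `delete` is needed before each return.
  std::unique_ptr<SharedMemoryHandle> handle(
      static_cast<SharedMemoryHandle*>(shm_handle));

  if (munmap(handle->base_addr, handle->byte_size) == -1) {
    return -4;  // `handle` is still released here
  }
  if (shm_unlink(handle->shm_key.c_str()) == -1) {
    return -5;  // ...and here
  }
  return 0;
}
```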
Goal: The Python client was not properly deallocating dynamic memory when destroying shared memory regions. This PR bundle fixes the leak and adds an additional test case.
Server PR: triton-inference-server/server#7172