Is this fixed: "can't send large data back to main thread"? #217
For which version of multiprocess and Python are you experiencing this error? Also, if you can give a simple example here that reproduces the error, that would also be good.
Thanks for the quick reply. Here's my helper module, which is supposed to apply a function defined in the "func_module_name.py" module to a tensor passed into parallel_tensor_apply(), in a shared-memory manner.
However, when using multiprocess from a Jupyter Notebook, I'm getting the following error after (I think) the applied function successfully executes and tries to return the result. I'm not sure yet whether my shared-memory code is wrong or whether it's a bug in the multiprocess library, but I suspect the library still contains the bug mentioned in the links above.
That's why I'm asking whether your fork (which is a good job, by the way) includes those fixes.
And I guess we can just assume that func_module.proc() simply returns a tensor larger than 2 GB for any input.
How does this code run? It seems that some of your example is missing. Can you also simplify it to the minimum that exhibits the behavior you are experiencing? Also, does the behavior only happen when run from a Jupyter notebook?
Here's an absolutely minimal example that should be equivalent:
Error message when running from cmd with multiprocessing (yes, even that):
To answer your question: yes, the error appeared everywhere. So I could have been totally wrong, and the error may lie in the multiprocessing library itself rather than in multiprocess, and the fix in the links I posted may not have been complete, so the error somehow remains for this scenario. For my case I found a workaround that avoids passing objects larger than 2 GB back from children, but if you find this bug interesting, feel free to investigate further; otherwise let's close this case, since the bug may well be in multiprocessing itself.
Closing this as a duplicate of #150.
The problem described here was supposed to be fixed in:
https://stackoverflow.com/questions/47692566/python-multiprocessing-apply-async-assert-left-0-assertionerror
python/cpython#9027
However, I'm still encountering this error when using multiprocess.
I wonder whether that fix was merged into multiprocess?