Timeout waiting for IOPub output #426
Thanks for raising the issue and making a clearly reproducible case. The fix in jupyter/nbconvert#994 helps with the issue but doesn't make it impossible to occur; I can reproduce with the latest nbconvert, which includes the mentioned change. The part that's causing the issue is that the zmq buffer can't keep up with the number of tiny messages per nbconvert cycle. If the message rate is reduced, e.g. by sleeping longer between messages, the problem goes away.
I've touched pretty much every part of the code up to the pyzmq layer now, and we can make it slightly better, but mostly there is a hard-ish limit to the maximum message rate a kernel client can handle. It may be worth making an issue on https://github.com/ipython/ipykernel to see if that kernel could apply backpressure, or skip the sys flush call, to prevent very high flush rates during kernel executions. All that being said, I'd be amenable to making papermill raise an error in this case.
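As an illustration of the rate-reduction idea, here is a minimal sketch of throttling on the user side by batching output into fewer, larger IOPub messages; the batch size, sleep interval, and data source are illustrative assumptions, not from the original thread:

```python
import time

# Illustrative only: emit output in batches instead of one tiny message per item,
# so the kernel sends far fewer IOPub stream messages per second.
items = (f"record {i}" for i in range(100_000))

batch = []
for item in items:
    batch.append(item)
    if len(batch) >= 1000:
        # One larger stream message instead of 1000 tiny ones.
        print("\n".join(batch), flush=True)
        batch.clear()
        time.sleep(0.01)  # small pause keeps the client from falling behind
if batch:
    print("\n".join(batch), flush=True)
```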
My original use case isn't necessarily lots of small messages, but rather lots of pngs. Something like the following:

```python
from time import sleep
from IPython.display import display, Markdown, Image

len(directories)  # roughly 900

for d in directories:
    display(Markdown(f'# {d}'))
    for png in ['a.png', 'b.png', 'c.png']:  # Roughly 200KB, 200KB, and 50KB on disk
        display(Image(filename=f'{d}/{png}'))
    sleep(0.5)

x = 'hello'

# next cell
print(x)
```

The papermill output ipynb file is ~120MB and shows pngs for roughly ~500 of the 900 directories. Is there a limit on size (in bytes) somewhere as well? Either way, I'd be happy with an error being raised for now.
While there's no explicit limit to a notebook output size, I would say that anything above 100MB is in the realm of "this will crash browsers". Papermill will actually handle very large notebooks better than browsers (it's only limited by rate of messages), but the format still doesn't support large files well.
Other papermill devs (@mpacer @captainsafia @rgbkrk @willingc): on the topic of raising an error for the buffer overload case, I think this would be a reasonable change, but it would differ from the default behavior that nbconvert has held for a long time.
Gosh, anything above 20 MB will hang most browsers.
I think raising an error in papermill's case makes sense. How reproducible is the notebook if data is dropped?
Highly reproducible as far as I could tell from my local tests.
I think raising an error makes sense. We can clarify in the docs/docstring and mention the difference in default behavior from nbconvert.
I'll work on making that change then.
See the following links for details on the issues with IOPub and nbconvert's ExecutePreprocessor:
- nteract/papermill#426 (comment)
- jupyter/nbconvert#994
Change made in the 2.0 release (was a single config line so I skipped the PR)
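For anyone hitting this outside papermill, a minimal sketch of opting into the same strict behavior with nbconvert directly; this assumes the ExecutePreprocessor traitlets `iopub_timeout` and `raise_on_iopub_timeout`, which is my understanding of the option papermill now enables by default:

```python
import nbformat
from nbconvert.preprocessors import ExecutePreprocessor

nb = nbformat.read("notebook.ipynb", as_version=4)  # placeholder path

# raise_on_iopub_timeout turns a missed/late IOPub message into a hard error
# instead of a warning; iopub_timeout is the per-message wait in seconds.
ep = ExecutePreprocessor(
    timeout=600,
    iopub_timeout=30,
    raise_on_iopub_timeout=True,
)
ep.preprocess(nb, {"metadata": {"path": "."}})
```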
We started getting this error as well.
This is almost certainly caused by this change from v2.0.0 finally kicking in.
Hello!
Issue: When a cell fills up the IOPub channel ZMQ buffer, papermill (or nbconvert?) (almost) silently trims off the output and carries on to the next cell. I believe it should raise an error and exit with a non-zero code. It appears this is addressed in jupyter/nbconvert#994, but I can't tell if the fix there will solve this issue.
Thank you!
To reproduce: Make a notebook with a single cell (adapted from jupyter/nbconvert#659 (comment)):
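A minimal sketch of such a cell, assuming a tight print loop as in the linked comment; the loop size and final value here are illustrative assumptions:

```python
# Illustrative reproduction: flood the IOPub channel with many tiny stream
# messages, then emit a final value that should appear in the output notebook.
for i in range(10000):
    print(i)

'hi'  # this last output is the one that silently goes missing
```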
Then, execute it. A warning is printed, but the exit code is zero, and the 'hi' output does not appear.