
Fixing #659 (output cutoff and iopub timeout) #994

Merged: 16 commits, merged into jupyter:master on May 17, 2019

Conversation

djherbis
Contributor

@MSeal @mpacer This is probably a little rough but figured this might be a good place to start.

I still need to add a test showing that this fixes my example. #659

I'll follow up with changing things towards #659 (comment); the reason this seems more complex than the suggestion there is that I was trying to maintain the current behavior in as many cases as possible.

In particular, the test cases seemed to fail with a "The handle is invalid" error if exec_reply is polled too late, but I'll still give your suggestion a shot and let you know how it goes.

I'm busy today but might try to play with this Sunday or when I get back to work on Monday.

Contributor

@MSeal MSeal left a comment

Overall the approach looks good. After re-reading it more carefully, I'm not sure we can do much better on the timeout handling than what I had suggested as a possible way to go. The code reads clearly and is a good implementation given the constraints.

You will have a rather annoying merge conflict (apologies for that) with some refactors of run_cell that got merged. If you need help resolving those conflicts, I can push the resolution onto your PR branch.

The next step after the conflicts would be tests: we'll want a few covering the new timeout conditions as well as the original failure mode you hit. Likely we'll have to emulate the original failure by mocking either the zeromq channel setup or the behavior of losing messages after a point.
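
One way to emulate that second option is sketched below with plain unittest.mock rather than the project's real test fixtures, so every name here is illustrative: the mocked iopub channel serves a few messages and then times out forever, as if the rest of the output had been lost.

from queue import Empty
from unittest.mock import MagicMock

def lossy_iopub(messages):
    # Hypothetical helper: deliver `messages` once, then raise Empty on every
    # later call, as if the remaining output never arrived.
    remaining = list(messages)

    def get_msg(timeout=None):
        if remaining:
            return remaining.pop(0)
        raise Empty
    return get_msg

kc = MagicMock()
kc.iopub_channel.get_msg.side_effect = lossy_iopub([
    {'msg_type': 'stream', 'parent_header': {'msg_id': 'fake'},
     'content': {'name': 'stdout', 'text': 'output before the cutoff'}},
])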

nbconvert/preprocessors/execute.py: 7 review threads (all resolved, most outdated)
@djherbis
Contributor Author

I seem to be running into some flakiness with the tests: everything passes on my machine, but Travis randomly passes or fails a few different methods. It does look like it could be related to my changes (though it's possible the test is inherently racy too?)

@MSeal
Contributor

MSeal commented Apr 24, 2019

So the Python 3 failure you're seeing is a race that only occurs on Travis. I've had to restart many Travis builds to avoid it. I haven't been able to reproduce the race on my local machine yet to figure out why the kernel is dying. It might be related to resources on Travis, or slow disks not responding before a timeout. For your PR we'll need to look closely at each kernel timeout since we're touching that logic, but if it passes on a restarted build it's likely a pre-existing test issue.

For Python 2 you have some 2/3 compatibility issues in the code that you'll need to resolve.

@SylvainCorlay
Member

So the Python 3 failure you're seeing is a race that only occurs on Travis. I've had to restart many Travis builds to avoid it. I haven't been able to reproduce the race on my local machine yet to figure out why the kernel is dying.

Seeing that also on other PRs related to kernels.

@djherbis
Contributor Author

@MSeal I merged in the latest and fixed the compat issues; what would you like next?

@djherbis
Contributor Author

Gah, more failures, I'll fix them

@MSeal
Contributor

MSeal commented May 2, 2019

Thanks a ton for getting the PR updated @djherbis -- and sorry for all the merge conflicts with the 5.5 release stuff.

I think the next steps are making some tests that mock the behavioral change. Specifically, where we pass the deadline in both the exec_reply and get_msgs code paths, for both the exception-raising and non-exception-raising cases.

@MSeal
Contributor

MSeal commented May 2, 2019

Figured out that random test failure btw, it's unrelated to your changes.

Let me know if you need any help with getting tests added.

@djherbis
Contributor Author

djherbis commented May 2, 2019

@MSeal I'm running into a bit of pain with the tests. I was adding them under test_execute.py::TestRunCell; the main issue seems to be controlling the timeout in there.

(There's no way to introduce a delay at the moment, and using timeout=0 gets converted to timeout=None internally.)

Should I be adding these tests under test_execute.py::TestExecute::test_run_notebooks, or somewhere else?

@MSeal
Contributor

MSeal commented May 2, 2019

Yeah, under TestRunCell is probably right. The way to achieve this test is to mock kc.shell_channel.get_msg and kc.iopub_channel.get_msg for each pattern respectively, which prepare_cell_mocks does for message receiving. You can replace those mocks' responses via preprocessor.kc.shell_channel.return_value = ... or preprocessor.kc.iopub_channel.side_effect = [...].

If you need help getting those test mocks set up, I can probably push a starting point onto your PR branch later today.
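
Concretely, that replacement pattern looks roughly like this inside a test body (a sketch only: preprocessor stands in for the TestRunCell fixture, .get_msg is added explicitly since that is the call being mocked, and the message dicts are placeholders):

from queue import Empty

# Illustrative only: prepare_cell_mocks has already attached mocks for the two
# channels; these lines just override what those mocks return.
preprocessor.kc.shell_channel.get_msg.return_value = {
    'parent_header': {'msg_id': 'fake'},
    'content': {'status': 'ok'},
}
preprocessor.kc.iopub_channel.get_msg.side_effect = [
    {'msg_type': 'status', 'parent_header': {'msg_id': 'fake'},
     'content': {'execution_state': 'idle'}},
    Empty(),  # the next poll times out
]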

@djherbis
Contributor Author

djherbis commented May 2, 2019

I tried directly mocking the get_msg calls for some exceptional cases; let me know if this is an acceptable way to accomplish it.

@djherbis
Contributor Author

djherbis commented May 5, 2019

Thanks, that makes more sense; it wasn't super clear to me how to use the mocks that way.

Anything else left?


if more_output:
    try:
        timeout = self.iopub_timeout
Contributor

So now with all the functionality in place, we likely need to address the fact that iopub_timeout no longer acts as it used to. In particular, since we aren't waiting for exec_reply to fully complete, we'll expect to time out here often, which means that raise_on_iopub_timeout will fire when users wouldn't expect it to.

@mpacer I think we should perhaps remove raise_on_iopub_timeout and iopub_timeout with this change and use timeout = self._timeout_with_deadline(1, deadline) for both conditional branches. Those other configurations just don't make sense any more and are problematic if defined as they were in earlier versions. What are your thoughts? Even if we make a less severe change, we'd need to address the change in timeout behavior somehow before this PR merges.
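
For context, a helper along those lines just caps each short poll by the time remaining until the overall deadline. A minimal sketch of the idea (the real method lives on the preprocessor and may differ in detail):

from time import monotonic

def timeout_with_deadline(timeout, deadline):
    # Never wait longer than the time remaining until the overall cell
    # deadline, even if the per-poll timeout is larger.
    if deadline is not None:
        remaining = deadline - monotonic()
        if remaining < timeout:
            timeout = max(remaining, 0)
    return timeout

# e.g. poll iopub for at most 1s, but never past the cell's deadline:
# msg = kc.iopub_channel.get_msg(timeout=timeout_with_deadline(1, deadline))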

Contributor Author

Actually, I tried to preserve that behavior: it won't hit the raise_on_iopub_timeout unless polling_exec_reply is false, and that's only false if we've passed the deadline or if exec_reply is not None (completed).

If we've passed the deadline we hit handle_timeout, which will either interrupt or throw. If it throws, it won't reach this section; if it interrupts, I suspect we would get an idle message.

Let me know if I got anything wrong here.

Contributor

Ahh yes, you're correct; my end-of-conference brain failed to read it properly. Then maybe we should deprecate those fields in the next release (5.5.1) so they can be removed in 6.0? That would help clean up this code and make it simpler to control in the future, imo.

We should add a test for this case, if it's not covered yet, to ensure we won't time out unexpectedly. That might be tricky to write with mocks, so I can probably write that test if it's not clear how to set it up. After that I think I'm good to merge once @mpacer approves / has any further concerns resolved.

Contributor Author

Hey, sorry, I was also away at a week-long work event this week.
Looks like I've got more merge conflicts to fix; I'll take a look Monday.

Contributor

Sorry there have been so many merge conflicts. I've been trying to be more careful to review / get more opinions on this PR before merging. There shouldn't be too many more conflicts. I asked @mpacer in person to take another look when she's available so we don't keep this PR open much longer.

Member

I am not quite following the reasoning behind why the fields need to be deprecated. Could you lay out the situation where these fields cease to make sense?

Contributor Author

@djherbis djherbis May 17, 2019

@MSeal probably has a better answer than me; my interpretation was that iopub_timeout exists to give a small window after receiving exec_reply (by which point we know all output should have been produced) for that output to reach the iopub_channel (in case of network latency, etc.). I think @MSeal might feel that since we are now polling for this output more regularly (and not just after exec_reply), the purpose of this flag has changed somewhat. Perhaps I shouldn't be using iopub_timeout at all unless exec_reply has finished, and should use a constant timeout / deadline similar to _poll_for_reply instead. I could be misinterpreting @MSeal though, so please correct me!

Even with all of that, I still think iopub_timeout is relevant after exec_reply is received, since you may still need to wait for output produced near the end of execution to arrive.

Contributor

That's pretty much on point. It's odd to have a separate post-exec_reply timeout in addition to an overall timeout when the two channels are being consumed in parallel.

I'll start a separate PR / issue on the field, independent of this PR, to avoid leaving this open longer.

Contributor Author

@MSeal Thanks. What do you think about using a constant (like 1s) instead of iopub_timeout in this PR for the poll on iopub_channel while polling_exec_reply=True, and then switching to iopub_timeout once exec_reply is received?
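
In code, that proposal would amount to something like the following inside the iopub polling branch (a sketch of the suggestion only, not the merged implementation):

# While the execute_reply is still outstanding, poll iopub with a short fixed
# interval so the loop keeps cycling; once the reply has arrived and we are
# only draining trailing output, honor the user-configured iopub_timeout.
if polling_exec_reply:
    timeout = self._timeout_with_deadline(1, deadline)
else:
    timeout = self.iopub_timeout
msg = self.kc.iopub_channel.get_msg(timeout=timeout)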

Contributor

Possibly, that would be one way to approach it. In general I'm saying that making it simpler, with a single timeout, would reduce confusion and be less nuanced.

more_output = True
polling_exec_reply = True

while more_output or polling_exec_reply:
Member

more_output being used as a signaling mechanism feels like a pretty significant piece of internal logic. Could you add a comment explaining why we're taking this approach?

I would think a description of the intended, separate roles of more_output and polling_exec_reply would be enough to satisfy a curious reader.

Contributor Author

Great point. I added some detailed comments around this section to explain why it's written this way; let me know if it's still not clear for future readers.
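
For readers who don't have the diff open, the shape of that section and of the comments being discussed is roughly the following. This is a simplified, hypothetical reconstruction, not the verbatim source; helpers such as passed_deadline, handle_timeout, poll_for_reply and process_message stand in for whatever the preprocessor actually calls.

# polling_exec_reply: still waiting for the execute_reply on the shell channel,
#                     bounded by the overall cell deadline.
# more_output:        still draining output on the iopub channel; iopub timeouts
#                     only matter (and may raise) once we have stopped waiting
#                     for the reply.
more_output = True
polling_exec_reply = True

while more_output or polling_exec_reply:
    if polling_exec_reply:
        if passed_deadline(deadline):
            handle_timeout()                 # interrupts the kernel or raises
            polling_exec_reply = False
        else:
            # Short poll so iopub keeps being consumed in parallel.
            exec_reply = poll_for_reply(msg_id, timeout=1)
            if exec_reply is not None:
                polling_exec_reply = False

    if more_output:
        try:
            msg = kc.iopub_channel.get_msg(timeout=iopub_timeout)
        except Empty:
            if polling_exec_reply:
                continue                     # reply still pending; keep looping
            if raise_on_iopub_timeout:
                raise
            more_output = False
            continue
        more_output = process_message(msg)   # False once the idle status arrives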

Member

@mpacer mpacer left a comment

Overall this looks like a nice set of changes! Thank you @djherbis and @MSeal for working through these details.

I asked for some more documentation and have a question about the comment on a deprecation strategy, but I think those could be addressed in later PRs.

If you want to put them in this PR too, I'll give that a review, but otherwise I think this looks good to merge.

@@ -510,6 +515,65 @@ def test_busy_message(self, preprocessor, cell_mock, message_mock):
        # Ensure no outputs were generated
        assert cell_mock.outputs == []

    @prepare_cell_mocks({
Member

These all look great!

Contributor Author

All the tests are great work by @MSeal (thank you!); it's my first contribution, so I was unfamiliar with how to use these test frameworks.

@MSeal MSeal merged commit 32550d4 into jupyter:master May 17, 2019
@MSeal
Contributor

MSeal commented May 17, 2019

@djherbis Thank you so much for going through all the bumps with the PR. I really appreciate you driving the change despite the time it took.

@MSeal MSeal added this to the 5.6 milestone Jul 30, 2019
@meeseeksmachine

This pull request has been mentioned on the Jupyter Community Forum. There might be relevant details there:

https://discourse.jupyter.org/t/nbconvert-5-6-0-release/1867/1

jsvine added a commit to jsvine/nbexec that referenced this pull request Jan 24, 2020
See the following links for details on the issues with IOPub and
nbconvert's ExecutePreprocessor:

- nteract/papermill#426 (comment)
- jupyter/nbconvert#994