Restore widgets in batches to not exceed the ZMQ high water mark message limit in the kernel #2765
I put the utility …
Restore widgets in batches to not exceed the ZMQ high water mark message limit in the kernel. Current ZMQ by default limits the kernel’s iopub message send queue to at most 1000 messages (and the real limit can be much lower; see ZMQ_SNDHWM at http://api.zeromq.org/4-3:zmq-setsockopt). We now request comm state in batches to avoid this limit. See voila-dashboards/voila#534 for more details, including where this was causing real problems.
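A minimal sketch of the batching idea described above (the PR's real helper is `chunkMap` in packages/base-manager/src/utils.ts; the names `requestStatesInBatches` and `requestState` here are hypothetical stand-ins, not the actual API):

```typescript
// Process a list of comm ids in fixed-size batches, awaiting each batch
// before issuing the next, so the number of in-flight state requests --
// and hence the kernel's iopub send queue -- stays well below the ZMQ
// high water mark.
async function requestStatesInBatches<T, R>(
  items: T[],
  chunkSize: number,
  requestState: (item: T) => Promise<R>
): Promise<R[]> {
  const results: R[] = [];
  for (let i = 0; i < items.length; i += chunkSize) {
    const batch = items.slice(i, i + chunkSize);
    // At most `chunkSize` requests are outstanding at any moment.
    results.push(...(await Promise.all(batch.map(requestState))));
  }
  return results;
}
```

This is the simplest sequential form; the actual `chunkMap` additionally runs several chunks concurrently, as discussed below.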
Force-pushed from 68e5aa0 to a507eae.
I have an alternative implementation in mind; I'll open a PR to voila so we can compare/discuss.
Force-pushed from cf8c1be to e744957.
I can confirm this is working (classic notebook). I did notice that via the notebook it is much faster than voila (even using your code). If there is a problem, we can revert to our implementation.
Wow, really nice work. The only thing I worry about is that we are going to hit the io rate limit of 1000 messages/second. I added some suggestions on how to handle that (not tested); let me know what you think.
packages/base-manager/src/utils.ts (outdated context):

```ts
// Start the first concurrent chunks. Each chunk will automatically start
// the next available chunk when it finishes.
await Promise.all(chunks.splice(0, concurrency).map(f => f()));
```
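The pattern in that context line (start the first `concurrency` chunks; each finishing chunk starts the next available one) can be sketched in isolation like this. Names are illustrative, not the actual `chunkMap` internals:

```typescript
// Run an array of async thunks with at most `concurrency` in flight.
// Each finishing task pulls the next pending one off the shared queue,
// so the pool stays full without ever exceeding the limit.
async function runPool<R>(
  tasks: Array<() => Promise<R>>,
  concurrency: number
): Promise<R[]> {
  const results: R[] = [];
  const queue = tasks.map((task, index) => ({ task, index }));
  const worker = async (): Promise<void> => {
    const next = queue.shift();
    if (next === undefined) {
      return; // queue drained, this worker is done
    }
    results[next.index] = await next.task();
    // This slot is free again: start the next available task.
    await worker();
  };
  // Start the first `concurrency` workers; each drains the shared queue.
  await Promise.all(
    Array.from({ length: Math.min(concurrency, queue.length) }, () => worker())
  );
  return results;
}
```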
Very interesting pattern!
This is the place where we can protect against the io rate limit, I think (not tested).
Suggested change:

```diff
-await Promise.all(chunks.splice(0, concurrency).map(f => f()));
+const rate = chunkOptions.rate ?? Infinity;
+const delay = (sec) => new Promise((resolve) => setTimeout(resolve, sec * 1000));
+const rateLimit = (f) => Promise.all([f(), delay((concurrency * chunkSize) / rate)]).then((v) => v[0]);
+await Promise.all(chunks.splice(0, concurrency).map(f => rateLimit(f)));
```
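The suggestion above is explicitly "not tested"; here is a self-contained version of the same idea, padding each call so it takes at least a minimum duration. The names are illustrative; in the suggestion, `minSeconds` corresponds to `(concurrency * chunkSize) / rate`:

```typescript
// Resolve after `sec` seconds.
const delay = (sec: number): Promise<void> =>
  new Promise((resolve) => setTimeout(resolve, sec * 1000));

// Run `f`, but make the whole call take at least `minSeconds`, so a loop
// of rate-limited calls cannot exceed the intended throughput.
async function rateLimited<R>(
  f: () => Promise<R>,
  minSeconds: number
): Promise<R> {
  // Promise.all expects an array; resolve to f's value once both settle.
  const [value] = await Promise.all([f(), delay(minSeconds)]);
  return value;
}
```

Note this only bounds the average rate per worker; a burst at the start of each chunk is still possible, which may be fine given the 1000-message window.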
packages/base-manager/src/utils.ts (outdated context):

```ts
 * The maximum number of chunks to evaluate simultaneously. Defaults to 1.
 */
concurrency?: number;
}
```
Suggested change:

```diff
-}
+/**
+ * The maximum rate (calls/second) at which we may call fn. Defaults to Infinity.
+ */
+rate?: number;
+}
```
```ts
// more details.
const widgets_info = await chunkMap(
  Object.keys(comm_ids),
  { chunkSize: 10, concurrency: 10 },
```
Suggested change:

```diff
-{ chunkSize: 10, concurrency: 10 },
+{ chunkSize: 10, concurrency: 10, rate: 500 },
```
widgetsnbextension/src/manager.js (outdated context):

```js
// https://github.com/voila-dashboards/voila/issues/534 for more details.
return baseManager.chunkMap(
  comms,
  { chunkSize: 10, concurrency: 10 },
```
Suggested change:

```diff
-{ chunkSize: 10, concurrency: 10 },
+{ chunkSize: 10, concurrency: 10, rate: 500 },
```
@maartenbreddels - testing this out a bit, I tried to see how it affected the data rates. I ran this snippet:

```python
import ipywidgets as widgets

widgets.Widget.close_all()

def recursive(n):
    if n == 0:
        return widgets.Button(description='Hi')
    else:
        cls = (widgets.VBox if n % 2 else widgets.HBox)
        return cls([recursive(n-1), recursive(n-1)])

display(recursive(9))
display(len(widgets.Widget.widgets))
```

For the initial display, I was seeing rates around 600-700 messages/second from the comm open messages from the kernel to the frontend. However, when I refreshed, which triggered the download of all comm info for the notebook with 2k widgets, I was only seeing message rates of 0.3, even when I switched the jlab manager back to using straight `…`. What kind of rates do you see?
FYI, just rebased this PR and pushed to a branch on my fork: https://github.com/jtpio/ipywidgets/tree/limitrequests

Let me know if you want to quickly check if the diff looks good and I can force push to this PR. The code snippet posted above changes slightly to take into account the change in #3122:

```python
import ipywidgets as widgets

widgets.Widget.close_all()

def recursive(n):
    if n == 0:
        return widgets.Button(description='Hi')
    else:
        cls = (widgets.VBox if n % 2 else widgets.HBox)
        return cls([recursive(n-1), recursive(n-1)])

display(recursive(9))
display(len(widgets.Widget._active_widgets))
```
CC @maartenbreddels