Question: How do I profile performance bottlenecks? #191
Depending on what you are doing, standard Python profiling tools should mostly work, such as the stdlib profile module and line_profiler. Some operations may cause hanging in the browser rather than the kernel (e.g. SVG plots with too many elements); in that case, using the browser's own profiling tools is a good idea.
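As a minimal sketch of the stdlib approach mentioned above (the function `slow_sum` is a hypothetical stand-in for whatever notebook cell is slow), profiling a call with `cProfile` and summarizing it with `pstats` looks like:

```python
import cProfile
import io
import pstats

def slow_sum(n):
    # Deliberately slow: a Python-level loop instead of vectorized code,
    # standing in for an expensive pandas/seaborn operation.
    total = 0
    for i in range(n):
        total += i * i
    return total

# Profile only the suspect call, not the whole session.
profiler = cProfile.Profile()
profiler.enable()
slow_sum(100_000)
profiler.disable()

# Render the top entries sorted by cumulative time.
buf = io.StringIO()
stats = pstats.Stats(profiler, stream=buf)
stats.sort_stats("cumulative").print_stats(5)
report = buf.getvalue()
print(report)
```

In a notebook, the IPython magics `%prun` (whole-cell function profiling) and, with line_profiler installed, `%lprun` give the same information without the boilerplate.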
Closing, as there is nothing to do here.
kevin-bates added a commit to kevin-bates/notebook that referenced this issue (Mar 25, 2020):

> This commit uses the approach used in jupyter_server jupyter#191 first proposed by David Brochart. This reduces code duplication and alleviates redundancy relative to configurable options. Also, the startup message now includes the version information. Co-authored-by: David Brochart <[email protected]>

kevin-bates added a commit to kevin-bates/notebook that referenced this issue (Mar 27, 2020), with the same commit message.

toonijn pushed a commit to toonijn/notebook that referenced this issue (Apr 9, 2020), with the same commit message.
Hello Jupyter community,
Apologies for asking this here. I've been trying to find information about performance profiling online, but my googling skills have failed me.
I use the Jupyter notebook daily to plot and analyse time series (mainly using seaborn and pandas).
Sometimes, when the number of data points exceeds a certain amount, the kernel starts to hang (10+ minutes) or dies.
Have there been any efforts to document the performance of the kernel for large-scale data visualization/analysis? I would like to establish some guidelines to help me decide when plotting something in the notebook is not a good idea.