Improving 4.x nbextensions #878

Closed
ellisonbg opened this issue Dec 17, 2015 · 49 comments

@ellisonbg
Contributor

@fperez @damianavila @bollwyvl @sccolbert @teoliphant @minrk @takluyver @ijstokes

I have been having a lot of on- and off-line discussion this week about the current state of nbextensions in 4.x. For a long time, we (at least this was my own logic) have hesitated to make any significant changes to how 4.x nbextensions are installed/loaded/packaged, because we know that much bigger changes are on the way in 5.0. After these recent conversations, I am convinced that we need to improve the existing 4.x nbextension architecture in the meantime. The current situation in 4.x is causing way too many problems for users and devs.

The goal of this issue is to 1) raise community awareness that we need to do something about this and 2) come up with a concrete proposal for moving forward.

Current pain points of 4.x nbextensions

nbextensions can be installed in the following locations:

In [2]: paths.jupyter_path('nbextensions')
Out[2]: 
['/Users/bgranger/Library/Jupyter/nbextensions',
 '/Users/bgranger/anaconda/envs/python34/share/jupyter/nbextensions',
 '/usr/local/share/jupyter/nbextensions',
 '/usr/share/jupyter/nbextensions']

Config can be loaded from the following locations:

In [4]: paths.jupyter_config_path()
Out[4]: 
['/Users/bgranger/.jupyter',
 '/Users/bgranger/anaconda/envs/python34/etc/jupyter',
 '/usr/local/etc/jupyter',
 '/etc/jupyter']

By default, installed extensions are not loaded until activated. The only place extensions can be activated is in the user's config directory (~/.jupyter/nbconfig/notebook.json):

In [13]: cat /Users/bgranger/.jupyter/nbconfig/notebook.json
{
  "load_extensions": {
    "widgets/notebook/js/extension": true,
    "create_assignment/main": true,
    "nbgrader/create_assignment": true
  }
}

Pain Point #1: even though nbextensions and config can be installed in system, sys.prefix or user paths, the list of extensions to activate is only loaded from the user config. Thus, there is no way of activating nbextensions at the system or sys.prefix level.

  • Example 1a: Because of this, a JupyterHub deployment can't enable the various nbgrader extensions at the system level; each user has to do it.
  • Example 1b: Continuum can't enable extensions on a conda env basis, even though they can install them there.

Pain Point #2: There is no standard package format for nbextensions (other than a directory of stuff) and no standard way of copying an nbextension into place. Because of this, there are multiple, separate hacky ways of packaging and installing nbextensions.

  • Example 2a: nbgrader has created custom subcommands, such as nbgrader extension install and nbgrader extension activate.
  • Example 2b: Continuum is starting a couple of new projects to try to solve these issues in the conda context.
  • Example 2c: We ourselves just gave up and hardcoded the ipywidgets nbextensions in the frontend code.
  • Example 2d: Projects like https://github.com/takluyver/cite2c have a separate install.py script that installs and activates its extension.
  • Example 2e: @minrk recommends just using ln -s to "install" his nbextensions here: https://github.com/minrk/ipython_extensions

Proposal for addressing these pain points.

Here is the overall approach:

  • Solve the above pain points in a fully backwards-compatible manner.
  • Release the fixes quickly in a 4.2 release.
  • In 5.x, on the existing pages we have (notebook/tree): formally deprecate the 4.x approach to loading/installing nbextensions, our existing JS APIs and CSS classes.
  • In 5.x on the new JupyterLab page: only use new APIs and the new npm based plugin approach.

Here are the technical details of the proposal:

Pain Point 1

@takluyver has made an excellent point that the "nbconfig" frontend configuration system deliberately only loads from the user's config (~/.jupyter) because these are really meant to be only "user preferences". I completely agree with this, and others I have spoken to also concur. So we can't make that part of our app start to load system- or sys.prefix-based config.

I propose to always load nbextension activation config from all config paths (system, sys.prefix and user), but to do so by injecting that data into the page.html template itself rather than loading it later through the nbconfig web service. This would allow us to keep system data out of the user-level app preferences but still load nbextension config from all locations. This is not difficult to implement and is fully backwards compatible.
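
As a rough illustration (not the implementation in #879), a merge across all config directories could look something like this, assuming the nbconfig/notebook.json layout shown above; the function name is hypothetical:

import json
import os
from jupyter_core.paths import jupyter_config_path

def collect_nbextension_config(section='notebook'):
    """Merge load_extensions settings from every config directory, user winning."""
    merged = {'load_extensions': {}}
    # jupyter_config_path() lists directories from highest (user) to lowest
    # (system) priority, so iterate in reverse and let later writes win.
    for config_dir in reversed(jupyter_config_path()):
        path = os.path.join(config_dir, 'nbconfig', section + '.json')
        if os.path.isfile(path):
            with open(path) as f:
                data = json.load(f)
            merged['load_extensions'].update(data.get('load_extensions', {}))
    return merged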

Pain Point 2

This one is more difficult and my proposal will be more controversial. Today, in many cases, people are shipping JS code in Python packages. For 5.0 we are going to stop doing that and embrace npm as our package format and manager, but that would require breaking changes in 4.x, so it is not on the table.

I propose that, for 4.2+, we embrace putting nbextensions into Python packages:

  • Don't standardize specifically where in the python package they are. This allows existing packages such as nbgrader to not have to change anything.
  • Instead, 1) provide metadata in setup.py that gives the package-relative paths of those assets and 2) provide that same metadata in __init__.py to enable runtime inspection. Something like this:
>>> import nbgrader
>>> nbgrader._jupyter_nbextension_paths
['nbextensions/assignment_list', 'nbextensions/create_assigment']
  • Using this simple convention, we could then improve the existing jupyter nbextension command line tool to work with Python packages: jupyter nbextension install nbgrader and jupyter nbextension enable nbgrader. We could also include flags that target installation/config to the system, sys.prefix and user directories. This would again be fully backwards compatible (see the sketch after this list).
  • Using this simple convention, Continuum can build conda installers for nbextensions with almost no effort.
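
As a hypothetical sketch of how jupyter nbextension install could consume that metadata (the attribute name and its list-of-relative-paths shape just follow the example above; nothing here is a finalized API):

import importlib
import os
import shutil

def install_py_nbextension(package_name, nbextensions_dir):
    """Copy a package's declared nbextension folders into an nbextensions dir."""
    module = importlib.import_module(package_name)
    pkg_root = os.path.dirname(module.__file__)
    for rel_path in getattr(module, '_jupyter_nbextension_paths', []):
        src = os.path.join(pkg_root, rel_path)
        dest = os.path.join(nbextensions_dir, os.path.basename(rel_path))
        if os.path.isdir(dest):
            shutil.rmtree(dest)  # replace any previously installed copy
        shutil.copytree(src, dest)
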
@SylvainCorlay
Member

Another pain point, which I don't know whether it can be addressed in the 4.x series, is the fact that certain nbextensions depend on (a certain version of) kernel-side code, such as custom interactive widget libraries.

At the moment, it is impossible to have two kernels with two different versions of ipywidgets, or bqplot, installed because they will look for the JavaScript at the same location.

Therefore, I think that the extension mechanism should acknowledge that there are two categories of extensions:

  1. Global notebook application extensions, which are not dependent on kernel code (like a spell-checker).
  2. Kernel-dependent extensions.

A proposal to solve this would be to have the kernelspec contain one more piece of information: a UUID, which would typically be generated when the kernelspec is installed. When running a notebook with this kernel, nbextension_base_path/u-u-i-d/ would then become a search path, and the one we use for custom widgets.
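
A rough sketch of what that lookup could look like, assuming a hypothetical uuid field written into kernel.json at kernelspec install time (none of this exists today):

import json
import os
from jupyter_client.kernelspec import KernelSpecManager

def kernel_nbextension_path(kernel_name, nbextension_base_path):
    """Return the kernel-specific nbextension search path, if the spec has a uuid."""
    spec = KernelSpecManager().get_kernel_spec(kernel_name)
    with open(os.path.join(spec.resource_dir, 'kernel.json')) as f:
        info = json.load(f)
    kernel_uuid = info.get('uuid')  # hypothetical field, per the proposal above
    return os.path.join(nbextension_base_path, kernel_uuid) if kernel_uuid else None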

cc @jdfreder

@ellisonbg
Contributor Author

Here is a super rough draft of a PR that addresses the first pain point: #879

Some questions that this brings up:

  • Do we need separate config=True traitlets on NotebookApp for enabling common, notebook, tree and terminal nbextensions?
  • What are they named?
  • How do we handle multiple config files that define those? Last wins? Try to merge?
  • Our .py config files allow lists to be appended/extended, but .json doesn't.
  • How will folks at Continuum write these config files programmatically?

There are some design decisions to make, but the good news is that it is simple code and logic to add this. Let's discuss more tomorrow.

@Carreau
Member

Carreau commented Dec 17, 2015

Release the fixes quickly in a 4.2 release.

I don't agree with "quickly". I would prefer "thoroughly tested", with good documentation and examples.

Don't standardize specifically where in the python package they are. This allows existing packages such as nbgrader to not have to change anything.

I would still like, if you make extensions in Python packages, to be able to activate them per submodule: jupyter activate nbgrader.student / jupyter activate nbgrader.teacher

throwing that data into the page.html template itself

Does that imply you need to restart the server when loading extensions?

@Carreau
Member

Carreau commented Dec 17, 2015

Extra note: if the packages are Python and have versions, we should be able to serve multiple versions at different URLs like /nbextension/<extension>/<version>/; if the version is omitted, latest is implied.

@minrk
Member

minrk commented Dec 17, 2015

Re: @SylvainCorlay's point, also during that discussion, we proposed a kernel-specific nbextension path, which we never got around to implementing. I feel bad about that. I think we ended up proposing an nbextensions dir inside the kernelspec that kernel-specific extensions could go in.

It seems a bit worrisome that we would officially bless Python packages as the temporary solution for 4.x nbextensions, and then turn 180º and say that it's all npm, and not Python packages as fast as we can. But if we want to make it more convenient, adding a flag for 'install js from a Python package':

jupyter nbextension install --py nbgrader

seems like a better middle ground than breaking jupyter nbextension install [path], or guessing whether [path] is a path or a package.

Do we need separate config=True traitlets on NotebookApp for enabling common, notebook, tree and terminal nbextensions?

I don't think so. What would these common nbextensions be?

How do we handle multiple config files that define those? Last wins? Try to merge?

I would do it the same way we do with the rest of config, where the more specific config has priority: user > env > system. I think that's the only thing that's missing from nbconfig for this to work.

How will folks at Continuum write these config files programmatically?

During the nbextension discussion a year ago, it was concluded that we must not activate extensions as part of installing them - that install & activate must be two separate actions. Are we changing our minds on that?

If someone wants to enable nbextensions via conda packages, the biggest hurdle is that all enabling currently resides in a single file, and conda packages should only write files, not modify them. To support this, the only idea I have is a config.d-style directory of config files, so that packages could drop a file in there, and all such files are loaded.
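
As a sketch of that config.d idea (the nbconfig.d directory name and layout here are hypothetical, just to show how drop-in fragments could be merged):

import glob
import json
import os

def load_nbconfig_d(config_dir, section='notebook'):
    """Merge every JSON fragment a package dropped into nbconfig.d/<section>/."""
    merged = {'load_extensions': {}}
    pattern = os.path.join(config_dir, 'nbconfig.d', section, '*.json')
    for path in sorted(glob.glob(pattern)):  # deterministic order
        with open(path) as f:
            fragment = json.load(f)
        merged['load_extensions'].update(fragment.get('load_extensions', {}))
    return merged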

@Carreau
Member

Carreau commented Dec 17, 2015

Re: @SylvainCorlay's point, also during that discussion, we proposed a kernel-specific nbextension path, which we never got around to implementing.

You can load extensions from the kernelspec directory using kernel.js; you just need to do relative requires.

It seems a bit worrisome that we would officially bless Python packages as the temporary solution for 4.x nbextensions, and then turn 180º and say that it's all npm, and not Python packages as fast as we can. But if we want to make it more convenient, adding a flag for 'install js from a Python package':

What about "just" adding npm global dir to the search path ?

I would do it the same way we do with the rest of config, where the more specific config has priority: user > env > system. I think that's the only thing that's missing from nbconfig for this to work.

For general frontend config, it's hard to merge JSON, though for a list of extensions, that might be easier.

@Carreau
Member

Carreau commented Dec 17, 2015

that install & activate must be two separate actions. Are we changing our minds on that?

I don't think so; I think the setup.py list is for discovering extensions.

@Carreau Carreau added this to the 4.2 milestone Dec 17, 2015
@jankatins
Contributor

Just that I understand it correctly:

This:

embrace npm as our package format and manager

just means that developers of packages need to use npm, not that users of the package need to install npm in addition to python+python package manager?

If not: I don't think it's good for the adoption of extensions if e.g. a julia user has to install a python environment to get a python package manager to install a jupyter notebook and then start installing node and npm to get extensions. Learning python (the language, the tools, the libs...) instead of using a preinstalled SPSS/STATA/... is already hard for students and researchers, so don't add on learning another packaging ecosystem to get notebook extensions.

@parente
Member

parente commented Dec 17, 2015

Maybe this belongs in a separate issue, but it seems like the right time to fix it along with these other problems. Installing server-side extensions suffers from a similar pain point about jupyter_notebook_config.py vs jupyter_notebook_config.json configs.

We found out the hard way that the JSON config takes precedence over the Python config in jupyter/dashboards#153. So if some server-side extensions ask users to add themselves to the .py config like c.NotebookApp.server_extensions = ['my.extension'] (currently: nbexamples, dashboards, declarativewidgets) while others use the ConfigManager class to add themselves to the .json config (nbgrader, anything using the nbsetuptools), only the .json ones will wind up getting enabled.

I have no problem switching over to one or the other, but, which is the correct one? Or does something need to change so that both are supported and the lists of extensions are merged across the config types?

@minrk
Member

minrk commented Dec 17, 2015

I have no problem switching over to one or the other, but, which is the correct one?

It's probably correct for .py config files to have higher priority, since they are generally human-edited and more powerful, while .json config files should always and only be programmatically edited. .py files can be considered 'manual overrides'.

In Python, it makes sense to do c.NotebookApp.server_extensions.append('my.extension'). The JSON config files cannot express this, since they are a simple dict-dump. And further, they don't generally need to, because a tool can open a JSON file, append to the list, and write it back, which is not practical with a Python file. A further point in favor of .py files having higher priority.
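
For illustration, the kind of programmatic JSON edit described here, using only the standard library (the file path and trait name just follow the examples in this thread):

import json

def append_server_extension(config_file, extension):
    """Load a .json config, append to NotebookApp.server_extensions, write it back."""
    with open(config_file) as f:
        config = json.load(f)
    exts = config.setdefault('NotebookApp', {}).setdefault('server_extensions', [])
    if extension not in exts:
        exts.append(extension)
    with open(config_file, 'w') as f:
        json.dump(config, f, indent=2)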

@Carreau
Member

Carreau commented Dec 17, 2015

In Python, it makes sense to do c.NotebookApp.server_extensions.append('my.extension'). The JSON config files cannot do this, since they are a simple dict-dump. And further, they don't generally need to, because opening and appending to a list in a JSON file is doable, unlike in Python. A further point in favor of .py files having higher priority.

We already had a long discussion on which one between .py and .json should take precedence and decided on JSON, as we want, at some point, to edit configuration only through the UI. Having .py take precedence would mean in many cases that if a user does c.NotebookApp.server_extensions = ['my.extension'], no automatic tool can ever activate an extension. So which one takes precedence is still not obvious to me, and I'm not sure a single case like this one changes the balance.

JSON could perfectly well have append also, as long as we decide that append or prepend keys become a LazyAppend/LazyPrepend.

Maybe we should just warn more loudly if one file erases the settings of another, or put some time into looking at tools like redbaron to modify a Python config file in the obvious cases when a config option is not dynamic.

@parente
Member

parente commented Dec 17, 2015

We maybe should just warn more loudly if one file erase the settings of another ...

The warning is definitely there and pretty loud on server start:

[W 14:29:11.533 NotebookApp] Unrecognized JSON config file version, assuming version 1
[W 14:29:11.536 NotebookApp] Collisions detected in jupyter_notebook_config.py and jupyter_notebook_config.json config files. jupyter_notebook_config.json has higher priority: {
      "NotebookApp": {
        "server_extensions": "<traitlets.config.loader.LazyConfigValue object at 0x7f15cc47dc18> ignored, using ['urth.dashboard.nbexts']"
      }
    }

But I'll admit @jtyberg and I missed it for some time when debugging the issue I linked above. Even when we did see it, we only knew what to do because we're expert-amateurs in how the Jupyter config system works. I'd bet a typical notebook user wouldn't have an easy time rectifying the problem.

Json could perfectly have append also, as long as we decide that an append or prepend keys become a LazyAppend/LazyPrepend.

If you wanted to scope it down to prepending/appending sequence values instead of having to merge arbitrary config objects, that would still solve the extension problem specifically without growing into a general purpose "merge all the configs!" solution.

@ijstokes

What is the non-Continuum-camp feeling about the importance of being able to support "sandboxing" of extensions? For us (at Continuum) that means having a clean mechanism to be able to use conda environments to sandbox different sets of extensions and perhaps different versions of the same extension. From our work, which has mixed both JS and Python code in the extensions, conda packaging and conda environments have worked really well.

I'm not clear on how this works in either a pure-Python-package or pure-NPM-package world. If there is a good and clean solution that doesn't include conda, great. But is there scope to discuss how conda can fill the need here to manage cross-language packaging, or has that ship already sailed?

@minrk
Member

minrk commented Dec 17, 2015

We maybe should just warn more loudly if one file erase the settings of another, or put some time in looking at tools like redbaron to modify a Python config file in obvious case when a config option is not dynamic.

If redbaron would allow us to write and update Python config files programmatically, I would be pretty happy to drop json config files altogether, since they would no longer solve a problem. But that's a long-term idea.

@minrk
Member

minrk commented Dec 17, 2015

But is there scope to discuss how conda can fill the need here to manage cross-language packaging, or has that ship already sailed?

We can never rely on conda for this, so it's always going to be the case for us to use standard language packaging in multiple languages with some manual steps for stitching the two together, and then it can be 'simpler' for conda users, where packages can properly express cross-language dependencies. But whatever we come up with, it has to work outside conda.

@SylvainCorlay
Member

Re: @SylvainCorlay's point, also during that discussion, we proposed a kernel-specific nbextension path, which we never got around to implementing. I feel bad about that. I think we ended up proposing an nbextensions dir inside the kernelspec that kernel-specific extensions could go in.

I would be ok to give it a try if you guys are ok with the proposal described earlier.

@ellisonbg
Contributor Author

How is everyone's availability for a video chat later today? I am free after 10 am PST.

@sccolbert
Contributor

I'm free any time.

@ijstokes

11am PT, 2pm ET would be good for me, or 12pm PT, 3pm ET.

An hour later is "possible" but sub-optimal. Friday AM-midday ET also WFM. Afternoon not so great.

@Carreau
Member

Carreau commented Dec 17, 2015

I probably can make it too.

@damianavila
Member

If I have to choose, I would do it tomorrow, so we give another 24 hours to the things raised here (in other discussions and in the PR itself), but if we are doing it today, I would prefer around/after 12pm PT.

@SylvainCorlay
Member

Is there a hackpad for this meeting?

@parente
Member

parente commented Dec 17, 2015

Friday would be better here too if possible.

/cc @lbustelo

@ellisonbg
Contributor Author

I like Damian's idea of having the meeting tomorrow. That will give me today to work on the implementation and allow more discussion. We also may get more turnout with the extra day's notice.

I propose 10am PST on appear.in or bluejeans to allow our "further east" folks to participate more easily. How does that sound?


@blink1073
Contributor

I will be unavailable for tomorrow, but would like to suggest the model used by git for chaining configuration files together:

"If not set explicitly with --file, there are four files where git config will search for configuration options:

$(prefix)/etc/gitconfig
System-wide configuration file.

$XDG_CONFIG_HOME/git/config
Second user-specific configuration file. If $XDG_CONFIG_HOME is not set or empty,
$HOME/.config/git/config will be used. Any single-valued variable set in this file will be overwritten by
whatever is in ~/.gitconfig. It is a good idea not to create this file if you sometimes use older versions of Git, as support for this file was added fairly recently.

~/.gitconfig
User-specific configuration file. Also called "global" configuration file.

$GIT_DIR/config
Repository specific configuration file.

If no further options are given, all reading options will read all of these files that are available.
If the global or the system-wide configuration file are not available they will be ignored.
If the repository configuration file is not available or readable, git config will exit with a non-zero
error code. However, in neither case will an error message be issued.

The files are read in the order given above, with last value found taking precedence over values
read earlier. When multiple values are taken then all values of a key from all files will be used.

All writing options will per default write to the repository specific configuration file. Note that this
also affects options like --replace-all and --unset. git config will only ever change one file at a time.

You can override these rules either by command-line options or by environment variables.
The --global and the --system options will limit the file used to the global or system-wide file respectively.
The GIT_CONFIG environment variable has a similar effect, but you can specify any filename you want."

https://www.kernel.org/pub/software/scm/git/docs/git-config.html

@Carreau
Member

Carreau commented Dec 17, 2015

We can also push that a few days down the road; I'm not sure there is an urgent need to do it now, as 4.1 is not yet out.

@bollwyvl
Contributor

I'll try to make it, but have a company event.

So we're going to use npm Real Soon Now, but need to at least work with virtualenv/pip and conda now. For the end user, that's all they want to do, or maybe some GUI stuff, not touch magically named files in five places or run a bunch of CLI commands.

Let's also try to make developers happy, and ease them into the npm world (lotta bower out there right now).

And let's try to make sysadmins happy. A surrogate for this is: how simply can I use binder to make a demo of my software? If you are doing a py/js job and requirements.txt or environment.yml isn't enough and you need a Dockerfile, it's too hard.

I am 👍 on allowing enabling extensions from all the places, as suggested... and would like to see this wrapped as a switch to nbextension install, even if it is included elsewhere. I would prefer to see a data-first approach; I have never liked the Python config files. JSON is validatable and has implementations everywhere.

As to the chaining: I would almost rather see behavior like lodash's _.merge, but the lists are still a problem. Germane here: server_extensions should really be a hash :) Then you could opt into or out of an extension by setting it to True/False.
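
A toy example of the merge behavior a hash would give (purely illustrative, with made-up extension names):

# Each config layer only touches the keys it cares about; later layers win per key.
system_cfg = {'nbexamples': True, 'dashboards': True}
user_cfg = {'dashboards': False, 'my_local_ext': True}  # opt out of one, opt into another

effective = dict(system_cfg)
effective.update(user_cfg)
# -> {'nbexamples': True, 'dashboards': False, 'my_local_ext': True}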

The version stuff is scary, and treading on what a real package manager should do, but if it's an identified need, we should go ahead and address it.

So, with those assumptions and:

  • you have a real setup.py package that has already been installed
  • somewhere in that package, you have a folder which contains:
    • package.json...
      • with at least name, version and main
    • and __init__.py so that python can find it (setuptools.path:style)

...changing the command to make the positional argument optional, and accept --py:

$> jupyter nbextension install --prefix="${CONDA_ENV_PATH}" \
  --py=nbgrader.nbextensions.assignment_list.static \
  --enable

Installing and enabling nbextension to `{prefix}` from python package `nbgrader`...
... found package.json
  ... found name `nbgrader.assignment_list`
  ... found version `0.2.0`
    ... creating `[email protected]`
       ... `0.2.0` is newer than previous version in `nbgrader.assignment_list` (`0.1.0`)
         ... removing `nbgrader.assignment_list`
         ... copying `[email protected]` to `nbgrader.assignment_list`   
  ... found main `main.js`
    ... `nbgrader.assignment_list/main` already enabled

Speaking of "standard package management"... being able to communicate all of that at install time with setuptools entry_points would be an option:

#setup.py
from setuptools import setup

setup(
    #...
    entry_points={
        'jupyter.nbextension': [
            'nbgrader.assignment_list = nbgrader.nbextensions.assignment_list.static',
        ],
        'jupyter.server_extension': [
            'nbgrader = nbgrader.nbextensions.assignment_list:load_jupyter_server_extension',
        ],
        #...
    },
)

...but there are probably complexities I am missing: environments (should be detectable), hub deployments, etc. I would really rather see the config files as a way to opt out of or customize what happens at the package manager level, rather than a necessary step for every single installed package.
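
For what it's worth, discovery of those entry points could look roughly like this with pkg_resources (the 'jupyter.nbextension' group name is just the one proposed in the setup.py snippet above, not an established convention):

import pkg_resources

def discover_nbextensions():
    """Map declared nbextension names to the package path holding their assets."""
    found = {}
    for ep in pkg_resources.iter_entry_points('jupyter.nbextension'):
        found[ep.name] = ep.module_name
    return found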

@fperez
Member

fperez commented Dec 18, 2015

For reference, notes of the Dec. 18 meeting are in this hackpad.

@ellisonbg
Contributor Author

For the record, the proposal that this morning's meeting came to was:

  • Continue to load the list of nbextensions to load from the frontend nbconfig.
  • In the jupyter nbextension subcommands, be able to read/write to all config directory locations.
  • In the JS API, load from all config directories, but only write to the user config location.
  • Continue to use the _jupyter_nbextensions_paths() API to find nbextensions in Python packages (see the sketch after this list).
  • Continue to rely on editing config files.
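
One possible shape for that function, placed in the package's __init__.py; the exact return format was still being worked out in #879 at this point, so the keys below are illustrative only:

def _jupyter_nbextensions_paths():
    # Hypothetical return format: one entry per nbextension shipped by the package.
    return [{
        'name': 'create_assignment',               # extension name
        'src': 'nbextensions/create_assignment',   # path inside the package
        'require': 'create_assignment/main',       # AMD module to load in the notebook
    }]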

@fperez
Member

fperez commented Dec 19, 2015

Mmh, I thought we'd agreed to go to the conf.d bag-o-files model instead of asking package managers to edit config files... Did I miss something?

@ellisonbg
Contributor Author

I don't think we can do that in our current approach without breaking BW compat:

  • Our nbconfig system already edits config files in response to REST API calls to the frontend config service.
  • The existing 4.0 jupyter nbextension command already makes edits to those same files.

The only thing we could do is to add another separate layer that does the conf.d approach. But it would have to sit alongside our existing stuff and not replace it. I think we should use the current edit-based approach for now and see if we want to add the conf.d approach later.

@fperez
Member

fperez commented Dec 19, 2015

Well, if we can offer the conf.d option for 4.2, conda/apt/etc. might choose that option instead, without breaking BW compatibility (I'm not suggesting removing the existing functionality).

I think for example apt has post-install scripts, but the policy is that they shouldn't modify the files that they installed, only do other things.

So I'd like to see if we can go in the conf.d direction as overall I think it's the saner option moving forward and towards 5.x, even if we do carry the existing solution for the 4.x lifecycle as well.

@ellisonbg
Contributor Author

I agree that in the long run we want to have the conf.d type of approach as well.


@mdboom
Contributor

mdboom commented Dec 30, 2015

I'm jumping into this discussion awfully late, coming over from @blink1073's great work in matplotlib/matplotlib#5754 to make interactive matplotlib plots a proper Jupyter widget.

My concern there is that the matplotlib widget fits very squarely into @SylvainCorlay's second category of "kernel dependent extensions". The coupling between the Javascript and Python side there is very tight, and as that stuff has developed it's almost never been the case that a new feature could be added or a bug fixed without changing both sides of the coin. It will be very important to the matplotlib widgets that there are no opportunities for a version mismatch between the Python and Javascript sides of the communication. It looks on first blush that #879 will address that -- matplotlib can create a NotebookApp subclass and specify where its Javascript lives (which could continue to be installed along with matplotlib in Python with Python-packaging tools etc. as it is now). Is my impression correct? And will that remain in the longer term plans?

@ellisonbg
Contributor Author

Mike, we are still working out the details, but we expect the situation to only improve. I don't think you will end up creating a NotebookApp subclass, though.

With 4.x nbextensions, most projects like matplotlib will ship their JS in the Python package, and that JS code will be installed and activated as a separate step by the user (we are improving that situation).

Starting with 5.x we will start to rely on npm for a lot of this, but will still have a similar process that allows the version numbers to be synced in Python and JS. Eventually you will probably want to separate out your JS code into an npm package that the Python package also includes in its sources.

Cheers,

Brian


@mdboom
Contributor

mdboom commented Jan 4, 2016

Starting with 5.x we will start to rely on npm for a lot of this, but will still have a similar process that allows the version numbers to be synched in python and JS. Eventually you will probably want to separate out your JS code into an npm package that the python package also includes in its sources.

The versioning here is critical obviously, and should prevent version mismatch problems, but it won't prevent problems with stale packages etc. On a purely logical level, it seems to me that code that is tightly coupled, and that couldn't exist without both client and server sides, should be installed in an atomic way. The piece I'm probably missing is what an npm package/installation provides over libraries in the kernel providing/pointing to their own resources (for the case of "kernel-specific extensions" -- it makes sense for primarily client-side extensions).

@sccolbert
Contributor

@mdboom Among other things, it prevents the duplicate loading of dependencies. Some client-side libs will not function correctly if they are loaded multiple times. It also makes it much simpler to specify those client-side dependencies.

@mdboom
Contributor

mdboom commented Jan 6, 2016

@mdboom Among other things, it prevents the duplicate loading of dependencies. Some client-side libs will not function correctly if they are loaded multiple times. It also makes it much simpler to specify those client-side dependencies.

But how does any of that apply to @SylvainCorlay's second category above (kernel-dependent extensions)?

@sccolbert
Contributor

Kernel-dependent extensions can provide their own list of JS dependencies, which get unioned with the front-end dependencies of the rest of the app.

@mdboom
Contributor

mdboom commented Jan 6, 2016

If I'm understanding correctly, there appears to be no way to install the JavaScript library atomically along with the kernel-side library under the proposed scheme. That is my fundamental objection to the design here. The strong versioning goes a long way, but not all the way, toward ameliorating some of the problems with that. The necessities/advantages of installing JavaScript content through npm seem to apply only to primarily client-side extensions, and cut against the needs of kernel-dependent extensions, where atomic installation is important.

All that said, I'll apologize again for coming late to this discussion. Certainly from matplotlib's perspective, this seems like a regression for both users (which have an additional manual installation step and the possibility for confusion and more toes to shoot oneself in) and developers (that must release packages on two different package frameworks, where before there was one), with little benefit in our particular use case. But obviously, we are but one case and will go with the flow if the benefits for other kinds of extensions outweigh the disadvantages for our kind.

@ellisonbg
Contributor Author

With nbextensions 4.x, there isn't a really reliable way of doing what you are asking. This is mainly because nbextensions are not versioned in any way.

With the new npm-based 5.x approach, code in the kernel or server will be able to specify which version of an npm package it requires. A user might have 5 different versions of that package installed, but we (and npm) will make sure that only the needed one gets loaded. This allows you to have separate installations of the Python and JS sides while still making sure versions match.


@jdemeyer
Contributor

Jupyter kernels have some of the same issues; see jupyter/jupyter_core#61.

You may want to consider one solution that fixes both nbextensions and kernel specs at the same time.

@haobibo
Contributor

haobibo commented Feb 18, 2016

@damianavila , @parente and maybe more:

Just want to know whether any decisions have been made about the installation and management mechanism for extensions (including both front-end and server-side).

As you may know, there is a popular project (https://github.com/ipython-contrib/IPython-notebook-extensions) hosting many notebook extensions (both front-end and server-side), including an nbextension that provides a web UI to manage front-end extensions.
@juhasch and other developers have made that project pip-install-able, but the current installation mechanism is sort of un-pythonic or un-notebook-nic, since we already have nbextensions.py to install front-end extensions.
Actually, extensions usually have three parts (nbextensions for the front-end, extensions for Python files, and templates for HTML templates). I wonder if we can modify nbextensions.py or adopt designs from ipython-contrib/IPython-notebook-extensions to support server-side extensions (the extensions and templates folders) in a recent release?

Besides, I also think that, for Jupyter Notebook end users (not developers), just using the command line jupyter nbextension install <extension path or url> is a good way to install extensions.

@damianavila
Member

@haobibo there were several discussions about this and we now have a clear picture of the things needed and how to implement them. In fact, @ellisonbg started a PR for this and I am working on some other branches to complete that PR and add additional missing pieces (for instance, a way to enable/disable server-based extensions à la nbextension).

It should be easy to adapt ipython-notebook-extensions to the new mechanism once that is finally merged. I would encourage you to try porting some of the extensions with the currently proposed implementation (#879) to see if there is something we are missing.

@ellisonbg
Contributor Author

Damian, please coordinate with @jdfreder - he is also working on finishing up the PR.


@damianavila
Member

@ellisonbg Nice, I will ping him. Thanks for letting me know 👍

@minrk minrk modified the milestones: 4.2, 4.3 Apr 7, 2016
@damianavila
Member

Since #879 was merged, I think this should be closed. There are other interesting discussions here but the thread is long and they get easily missed. I would have further discussions in new threads. Thoughts?

@minrk minrk modified the milestones: 4.2, 4.3 Apr 11, 2016
@minrk minrk closed this as completed Apr 11, 2016
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Apr 27, 2021