I am developing the literary library, which enables users to develop Python packages/modules using Jupyter Notebooks. It aims to facilitate a kind of literate programming (see philosophy and why). When the developed package is built into a wheel, it is ultimately a series of .py files on disk, but during development I use an import hook to enable `import my_notebook`.
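For context on how such an import hook works, here is a minimal stdlib-only sketch. This is not literary's actual implementation: the `NotebookFinder`/`NotebookLoader` names and the `.ipynb`-next-to-`sys.path` lookup are my simplification, and the real pipeline applies an AST transform before executing the source.

```python
import importlib.abc
import importlib.util
import json
import sys
from pathlib import Path


class NotebookLoader(importlib.abc.Loader):
    """Execute a notebook's code cells as a module body (simplified)."""

    def __init__(self, path):
        self.path = path

    def exec_module(self, module):
        nb = json.loads(Path(self.path).read_text())
        # Crude stand-in for nbconvert: concatenate the code cells.
        source = "\n".join(
            "".join(cell["source"])
            for cell in nb["cells"]
            if cell["cell_type"] == "code"
        )
        # The real pipeline would run its AST transform here before exec.
        exec(compile(source, self.path, "exec"), module.__dict__)


class NotebookFinder(importlib.abc.MetaPathFinder):
    """Find <name>.ipynb next to entries on sys.path (simplified)."""

    def find_spec(self, fullname, path=None, target=None):
        for entry in (path or sys.path):
            candidate = Path(entry or ".") / f"{fullname.rpartition('.')[2]}.ipynb"
            if candidate.exists():
                return importlib.util.spec_from_loader(
                    fullname, NotebookLoader(str(candidate))
                )
        return None


# Installing the finder is what makes `import my_notebook` resolve.
sys.meta_path.append(NotebookFinder())
```

With this installed, a `my_notebook.ipynb` sitting on `sys.path` becomes importable like any `.py` module, which is exactly the situation the language server knows nothing about.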
There is a one-way transformation between notebook and source code that drives both package generation and the import hook. It looks a little like this:
notebook -> nbconvert -> AST transform
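As a rough illustration of this pipeline, here is a stdlib-only sketch. This is not literary's real exporter or transform set: the nbconvert step is approximated by concatenating code cells, and `StripBareExpressions` is a hypothetical example transform (dropping display-only expressions such as a trailing `df.head()`).

```python
import ast
import json


def notebook_to_source(nb_json: str) -> str:
    """Crude stand-in for nbconvert: concatenate the code cells."""
    nb = json.loads(nb_json)
    return "\n".join(
        "".join(cell["source"])
        for cell in nb["cells"]
        if cell["cell_type"] == "code"
    )


class StripBareExpressions(ast.NodeTransformer):
    """Hypothetical transform: drop module-level bare expressions.

    Illustrative only -- note this would also drop a module docstring,
    since a docstring is an ast.Expr too.
    """

    def visit_Module(self, node):
        node.body = [
            stmt for stmt in node.body if not isinstance(stmt, ast.Expr)
        ]
        return node


def transform_source(source: str) -> str:
    """Apply the AST transform and emit source (needs Python 3.9+ for ast.unparse)."""
    tree = ast.parse(source)
    tree = StripBareExpressions().visit(tree)
    return ast.unparse(tree)
```

The key point for this issue is that the output of `transform_source` is the module the import hook actually executes, so its line numbers no longer match the cells the user is editing.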
My understanding is that jupyterlab-lsp builds a virtual-document representation of the notebook and serves that to the language server. I am not familiar enough with the LSP and its implementations to determine how to proceed. I want to enable the intelligent refactoring tools that leverage the relationship between a symbol and its notebook cell, but in this instance for symbols imported from different notebooks. These symbols are not produced by a simple concatenation of cells; the final source generated for the imported module is transformed as described above.
I think this is two separate issues:
Supporting import hooks in jedi
Supporting mangled notebooks in the frontend
In short, I want to enable the language server to resolve notebook imports (by loading their generated source) to find symbols, and I want the frontend to be able to jump-to-definition into these notebooks.
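To make the frontend half concrete: if the generated source were a plain concatenation of cells, mapping a position in it back to a notebook cell would be straightforward, as in this hypothetical sketch (`build_line_map` is my own illustrative helper, not part of jupyterlab-lsp or literary):

```python
def build_line_map(cell_sources):
    """Map 1-based line numbers of the concatenated source back to
    (cell_index, line_in_cell) pairs.

    This inverse is what a frontend needs for jump-to-definition into
    a notebook. It is only valid for pure concatenation; an AST
    transform (as in literary) invalidates it, which is exactly what
    makes this issue hard.
    """
    mapping = {}
    global_line = 1
    for cell_index, src in enumerate(cell_sources):
        for line_in_cell, _ in enumerate(src.splitlines(), start=1):
            mapping[global_line] = (cell_index, line_in_cell)
            global_line += 1
    return mapping
```

Supporting transformed notebooks would mean either carrying such a mapping through the transform (e.g. via source maps) or re-deriving it from the transform's output.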
Thank you for bringing this up! Linking back to our earlier gitter discussion for reference in the future.
It is good to discuss it here so we can think of a solution that would work with any language server (not just a single one). I am leaning towards having a source transformer as a server extension in the future to allow for similar use cases (not saying that it will replace the one on the frontend). This would also allow jupyter-lsp to be more easily used by other clients such as nteract or other IDEs.
This issue might be similar to #28.