Splitting of language dictionary into packages #69
Yes, jupyterlab-lsp uses a monorepo like that. However I would stop for a moment and think about alternatives. To improve on the current situation we can either:
Using a dictionary from the web is independent of these strategies (though with the first one we could store it on disk to get better load times). Overall I believe we should have a server extension rather than a manager on the frontend. However, if you have already started re-structuring the code to use …
See #49. My point is that splitting things on the frontend alone is not sufficient to solve the issues we face; a server extension is the better way. I am happy to help with this.
Oh, I'm not far into the implementation. I was thinking of a server extension as well. It is not really clear to me how we should distribute at least some of the main languages in a pip/conda setup. I mean, where in the JupyterLab structure is the place where the user can store the data? If we can create a fixed directory which can be accessed by the server extension and which we can fill with pip packages, that would work.
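The "fixed directory" idea above could be sketched roughly like this (a minimal pure-Python sketch; the folder name `dictionaries` and the helper name are assumptions for illustration, not the extension's actual layout):

```python
from pathlib import Path


def find_dictionary_dirs(data_paths):
    """Return existing 'dictionaries' subfolders from a list of
    Jupyter data paths. Packages installed via pip/conda would drop
    their files into one of these shared locations, and the server
    extension would scan them all."""
    found = []
    for base in data_paths:
        candidate = Path(base) / "dictionaries"
        if candidate.is_dir():
            found.append(candidate)
    return found
```

A pip package would then only need a data-files entry that installs into one of these directories; no frontend change would be required to pick up a newly installed dictionary.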
That's great! Happy to take a look at some point.
I do not remember the exact path at the moment, but I believe it would be best answered by @goanpeca |
Okay, then I will look at the server extension. Following the examples, it does not look very complicated. Developing web APIs is not unknown to me ;-)
Okay, I've implemented a solution with a server extension and a language manager in the frontend. It is working fine so far. Inside the Jupyter environment there are data paths (see `jupyter --paths`). The missing piece is how to distribute the dictionaries inside packages. Should I create a PR for the code changes, and can you (@krassowski) set up the dictionary distribution?
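For context, the server endpoint would presumably enumerate Hunspell-style `.aff`/`.dic` pairs found in such a data directory. A minimal sketch of that listing step (the function name and file layout are assumptions, not the actual implementation):

```python
from pathlib import Path


def list_languages(dictionary_dir):
    """List language codes for which both the .aff and .dic files of a
    Hunspell-style dictionary are present. Only complete pairs are
    reported, since a spellchecker needs both files to load a language."""
    codes = []
    for aff in sorted(Path(dictionary_dir).glob("*.aff")):
        if aff.with_suffix(".dic").exists():
            codes.append(aff.stem)
    return codes
```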
Yes, please create a PR and I will happily look into moving this forward. I should have some time this weekend. |
I'm currently working on a splitting strategy for the dictionaries. It also includes the possibility of loading custom dictionaries from the internet or locally. The idea is to implement an ILanguageManager service as a token package, which is then exported to other packages as well. The spellchecker package can then select from the list of registered languages. The language items are stored as webpack links, but a custom package can also provide a (user-configured) web address for loading. This will also solve #66.
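The registration flow described above could be sketched as follows (a Python sketch of the idea only; the real ILanguageManager would be a TypeScript token consumed via the JupyterLab plugin system, and all names here are illustrative):

```python
class LanguageManager:
    """Sketch of the proposed language manager: dictionary packages
    register languages, each with a source that is either a bundled
    (webpack-linked) path or a user-configured URL. The spellchecker
    then picks from the registered set."""

    def __init__(self):
        self._languages = {}

    def register(self, code, display_name, source):
        # Reject duplicate registrations so two packages cannot
        # silently fight over the same language code.
        if code in self._languages:
            raise ValueError(f"language {code!r} already registered")
        self._languages[code] = {"name": display_name, "source": source}

    def list_languages(self):
        return sorted(self._languages)
```

A remote dictionary (#66) and a locally bundled one would go through the same `register` call, differing only in their `source`.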
At the moment I have understood the logic of the token package, but the problem is that such a token package (and any further new packages) requires restructuring the repository. I suggest creating a packages folder into which we put all the necessary packages. As far as I understood, with the workspaces entry in the package.json of the main directory one can trigger compilation and packaging of all packages at the same time.

In my tests this is working. However, I ran into a problem: my test icon package is distributed and initialised by the spellchecker extension as intended, but a demo package which should consume the token is not working. It links the token package description, yet for some reason it tries to use its own copy of the token package, which is obviously not initialised by any process; the console complains about accessing an unprovided structure. I've looked at the jupyterlab-topbar repo, which does the same thing, and there it works.

Anyway, I think the implementation path is okay, so we can start converting the repo into a multi-package setup step by step and then try to split the dictionaries into individual packages.
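For reference, the workspaces entry mentioned above is the standard Yarn/npm monorepo convention; a root package.json along these lines (the `packages/*` folder name is the suggested layout, not an existing one) lets a single install/build command cover all sub-packages:

```json
{
  "private": true,
  "workspaces": [
    "packages/*"
  ]
}
```

The "copy of the token package" symptom is typically caused by the demo package bundling its own duplicate of the token instead of deduplicating it through the workspace, so hoisting the token into a shared workspace dependency is exactly what this setup is meant to address.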
Do you have experience with this kind of multi-package repo for JupyterLab?