Create a Database Index for Jedi #1059
Recently I wrote my own source indexer using jedi. It would be great if the index could be exposed to the public API in some way.
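Not part of the original comment, but a minimal sketch of what such an indexer could look like on top of Jedi's public API (`jedi.Script.get_names`, available since Jedi 0.16); `build_index` and the tuple layout are made up for illustration:

```python
# Hypothetical project indexer built on Jedi's public API; not the
# indexer mentioned in the comment above.
from pathlib import Path

import jedi


def build_index(project_root):
    """Map each .py file to the names Jedi finds defined in it."""
    index = {}
    for path in Path(project_root).rglob("*.py"):
        script = jedi.Script(path=str(path))
        # all_scopes=True also collects names nested in classes and functions.
        names = script.get_names(all_scopes=True, definitions=True, references=False)
        index[str(path)] = [
            (n.full_name or n.name, n.type, n.line) for n in names
        ]
    return index


if __name__ == "__main__":
    index = build_index(".")
    print(sum(len(v) for v in index.values()), "names indexed")
```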
That's actually pretty fast. Did you index all the subfolders (asyncio, multiprocessing, json, etc.)? Also, can you post the script? I wonder if it's "complete".
@hajs I would still be interested :)
Are there any next steps on this issue? Maybe one more friendly ping for @hajs's solution.
@davidhalter are you still interested in this idea? I'm currently planning on building out a database of all potential imports using Jedi for the symbol inspection. Would you be interested in the issues I find? If so, what would be the best format to report errors in?
I'm definitely interested in your findings, but as I said above, it's pretty unlikely that Jedi's architecture is going to change a lot. There are a lot of underlying issues. I'm currently rewriting parso in Rust and having a great time (it's not open source yet, though).
@davidhalter Very interested in contributing to the Rust versions of parso and Jedi when you open them up.
Will post it here once it's in good shape. However, I want to do a lot of things the right way this time, so I'm keeping it private for now. I have been working on the parser for the last three months, but I unfortunately don't have a lot of time for it.
Thank you for working on this! In the meantime, would it be appropriate to have […]? I profiled some language servers using jedi and it appears that […].
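The actual profiling setup was not shared in the thread; a sketch of how such a measurement might be done with `cProfile` (the source snippet and cursor position below are placeholders):

```python
# Hypothetical profiling harness for a single Jedi completion request.
import cProfile
import pstats

import jedi

source = "import numpy\nnumpy."  # complete right after the dot

# Profile one completion call and print the hottest functions by cumulative time.
cProfile.run("jedi.Script(source).complete(line=2, column=6)", "jedi_complete.prof")
pstats.Stats("jedi_complete.prof").sort_stats("cumulative").print_stats(20)
```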
What did you profile? Can you share the results?
This is a tricky one. Basically, it's definitely not possible to do this in a general way, because the Jedi caches need to be invalidated somehow if a library changes. This is exactly what this issue is about. However, I thought that we could maybe use the cache just if […]. I think I would just argue that […].
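To make the invalidation concern concrete, here is a small, speculative sketch (not Jedi's actual caching code) where an entry is only reused while the file it was built from is unchanged on disk:

```python
# Illustration of mtime-based cache invalidation; purely hypothetical.
import os


class ModuleCache:
    def __init__(self):
        self._entries = {}  # path -> (mtime, cached_value)

    def get(self, path, build):
        mtime = os.path.getmtime(path)
        entry = self._entries.get(path)
        if entry is not None and entry[0] == mtime:
            return entry[1]          # file unchanged, reuse the cached result
        value = build(path)          # e.g. re-run inference for this module
        self._entries[path] = (mtime, value)
        return value
```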
Thank you for getting back to me. I worked around this by deferring the call to […]: palantir/python-language-server@develop...krassowski:feature/asynchronous/labels-cache. It was tricky, especially with Jedi not being exactly thread-safe, but adding a lock solves the issue. I decided to use a custom cache key instead of the default hash implementation (to avoid inclusion of […]). Your reply will certainly help to plan for the future, and potentially to upstream such an approach. I got down to <<1 second for […].
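A rough sketch of the workaround described above (serialize Jedi calls behind a lock and cache labels under an explicit key); the function and variable names here are invented, and the real change lives in the linked branch:

```python
# Hypothetical version of the lock-plus-cache workaround; see the linked
# python-language-server branch for the actual implementation.
import threading

import jedi

_jedi_lock = threading.Lock()
_label_cache = {}  # (path, line, column, code) -> completion labels


def cached_completion_labels(code, path, line, column):
    key = (path, line, column, code)
    if key in _label_cache:
        return _label_cache[key]
    with _jedi_lock:  # Jedi is not guaranteed to be thread-safe
        completions = jedi.Script(code, path=path).complete(line, column)
    labels = [c.name for c in completions]
    _label_cache[key] = labels
    return labels
```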
Note that with such an approach you're also losing some of Jedi's correctness. I would really recommend using something like […]. In general, almost all other libraries are not an issue, because they do not export a thousand functions in one module. The culprits are always […].
Thank you. I gave up on the asynchronous approach, and followed your advice to treat the likes of […].
I understand. I will try to nudge the popular language servers in this direction (but it might take time, as it is only possible with the recent LSP 3.16 and many clients believe that the label, which is what the signature is being used for, should be available from the beginning). Nonetheless, I will be very happy to see any performance improvements to […].
This sounds like a very interesting task. I'm not sure what the etiquette is with regard to helping out, but I would be interested in contributing to the Rust re-implementation of Jedi 👍
For a lot of things (especially usages) Jedi's completely lazy approach is not good enough. It is probably better to use a database index cache. The index will basically be a graph that saves all the type inference findings.
This is just an issue for discussion and collection of possible ideas.
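One speculative way to picture such a graph-shaped index is a small relational schema with symbols as nodes and inference results as edges; none of this reflects an actual Jedi design, it is only meant to make the idea concrete:

```python
# Speculative on-disk index: symbols are nodes, and "this name resolves to
# that definition" edges are inference results. Purely illustrative.
import sqlite3

SCHEMA = """
CREATE TABLE IF NOT EXISTS symbols (
    id     INTEGER PRIMARY KEY,
    module TEXT NOT NULL,
    name   TEXT NOT NULL,
    kind   TEXT NOT NULL,   -- module / class / function / statement ...
    line   INTEGER,
    col    INTEGER
);
CREATE TABLE IF NOT EXISTS inferences (
    source_id INTEGER REFERENCES symbols(id),  -- the name being inferred
    target_id INTEGER REFERENCES symbols(id),  -- the definition it resolves to
    PRIMARY KEY (source_id, target_id)
);
CREATE INDEX IF NOT EXISTS idx_symbols_lookup ON symbols(module, name);
"""

conn = sqlite3.connect("jedi_index.sqlite")
conn.executescript(SCHEMA)
conn.commit()
```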