Performance of Code Completion for LARGE files #23
Comments
Unfortunately, 40,000 lines of Python code is simply a large file to process. We use Jedi for our intellisense, and it seems to not handle a file of that magnitude very well. I'm assuming you still want the intellisense, just faster? Or do you want to turn it off entirely to avoid the penalty? Otherwise, we have discussed trying to share the intellisense engine between us and PTVS (it would probably be a new one, not the one PTVS currently has), but it would require downloading .NET, and no concrete plans beyond "it's a possibility" have been made yet.
Jedi is unfortunately slow. I just confirmed that with a simple test.
This may not be an adequate comparison or usage, but using the code above, it takes less than 5 seconds to build the completion list.

We'd definitely want IntelliSense; the classes wouldn't be usable without it. One issue is that the penalty for extracting the completion data from a Python file appears to be paid each time IntelliSense is requested, rather than the result being cached. I'm not sure users would accept a 20-second delay, even once per VS Code launch per file (and a 5-second delay, if that's the fastest Jedi can manage, probably isn't acceptable more than once a session either). Ideally, the Jedi results would be cached until they become stale or are invalidated.

Since the files I'm talking about are generated, and change very infrequently, a one-time Jedi-to-cache-file step would be a very effective way of reducing the time taken by further IntelliSense usage. Further, in our case, it would be great if the cache files could be deployed to a cache directory or as side-car files, so that we'd Jedi-compile them once per Python file generation and no user's VS Code instance would ever need to process the file locally. If the results were cached, especially for larger Python files, I'd expect many users of the Python extension in VS Code would benefit.
@brettcannon -- any thoughts about this issue?
In the new year we're hoping to start looking into our intellisense performance and quality so we can begin to address the issues you and others are running into with it.
This may be improved by the fix for #152 (no longer immediately loading method docs, just names).
@wiredprairie - is it better in 0.9? |
@MikhailArkhipov - unfortunately, it still takes 20+ seconds.
@wiredprairie would it be possible to upload a test file for us to benchmark against? (BTW we are upgrading Jedi to 0.12.0; it probably won't resolve this issue but you never know 😉 )
@brettcannon This is still a good example.
@wiredprairie Oops, sorry for not noticing the hyperlink in the initial message!
It's been almost a year and IntelliSense is still so slow...
[ @MilkyHearts I edited your ✉️ ⬆️ to come off as more 😃 and less 😠 ]
@MilkyHearts @wiredprairie I can't make any promises about stability, accuracy, etc., but if you go into your settings and set `jediEnabled` to `false`, you can try out the experimental replacement.

We are also working on it regularly, so if you end up wanting to try newer bits later, before we release a public preview, just go into the extension's directory and delete the downloaded bits so the latest build is fetched again.

But as I said, this is not even a public preview for a reason, so no promises about anything. 😁
@brettcannon OMG THANK YOU! The autocomplete is instant after setting `jediEnabled` to `false`. I'm so happy.
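For anyone landing on this thread, the switch described above goes in VS Code's `settings.json`. A minimal fragment, assuming the `python.jediEnabled` key used by 2018-era versions of the extension (later versions replaced this mechanism with the `python.languageServer` setting):

```json
{
    // Disable Jedi so the Python extension uses its alternative
    // IntelliSense engine instead.
    "python.jediEnabled": false
}
```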
Closing as this is an upstream issue which we don't have direct control or influence over.
Environment data
VS Code version: 1.18.0
Python Extension version: 0.7.0
Python Version: 3.6.0
OS and version: Microsoft Windows [Version 10.0.15063]
Actual behavior
I've written a tool that, given a tabular data structure with many thousands of columns, creates a new Python class definition (in a .py file). The resulting Python file is used by data scientists to access the columns of the data structure in a programmatic way.
Each column currently results in generated code following this pattern:
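The original snippet did not survive in this copy of the issue. A hypothetical sketch of the kind of per-column pattern described — one property per column, with names invented here for illustration:

```python
class DataSetColumns:
    """Generated accessor class: one property per column (illustrative only)."""

    def __init__(self, row):
        self._row = row

    @property
    def customer_id(self):
        """Column 0: unique customer identifier."""
        return self._row[0]

    @property
    def order_total(self):
        """Column 1: order total in dollars."""
        return self._row[1]

    # ...repeated for each of the thousands of columns...
```

With thousands of such properties, the class body alone easily reaches tens of thousands of lines, which is the shape of file the completion engine has to analyze.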
For generated files with hundreds of columns, the VS Code Python extension works spectacularly. However, for large files, it slows to a crawl. For example, one data structure results in a Python file with 40,000+ lines.
When using code completion in VS Code, it takes 20+ seconds to show the list of properties. Needless to say, that's too slow. And unfortunately, it's not a one-time cost: every invocation of the code completion feature incurs the same delay.
I can adjust the output of my code generation tool, but I don't know what I could do to reduce the parse time for code completion.
I experimented with removing the documentation from the Python file, and that had little if any noticeable effect on the time taken to show matches.
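That experiment can be reproduced mechanically rather than by hand-editing the generated file: the standard-library `ast` module can strip docstrings (requires Python 3.9+ for `ast.unparse`). This is a sketch of one way to do it, not the tool the reporter used:

```python
import ast

def strip_docstrings(source: str) -> str:
    """Return source with module, class, and function docstrings removed."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, (ast.Module, ast.ClassDef,
                             ast.FunctionDef, ast.AsyncFunctionDef)):
            body = node.body
            # A docstring is a leading expression statement whose value
            # is a string constant.
            if (body and isinstance(body[0], ast.Expr)
                    and isinstance(body[0].value, ast.Constant)
                    and isinstance(body[0].value.value, str)):
                # Drop the docstring; keep a `pass` if the body becomes empty.
                node.body = body[1:] or [ast.Pass()]
    return ast.unparse(tree)
```

Comparing completion latency on the original file versus `strip_docstrings(original)` isolates how much of the cost comes from docstrings as opposed to the sheer number of definitions.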
Expected behavior
Fast code completion suggestion list (under 1 second).
Steps to reproduce:
A super-large Python file containing a class with thousands of properties is needed. I've created this public gist with an example.
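The gist link did not survive in this copy of the issue; a minimal generator that produces a comparable stress-test file can stand in for it. The column count, naming, and output filename below are assumptions for illustration, not the reporter's actual gist:

```python
# One generated property per column; the template mirrors the pattern
# described in "Actual behavior" above.
TEMPLATE = '''
    @property
    def column_{i:05d}(self):
        """Value of generated column {i}."""
        return self._row[{i}]
'''

def generate_class(num_columns: int) -> str:
    """Build the source of a class with one property per column."""
    parts = ["class GeneratedColumns:",
             "    def __init__(self, row):",
             "        self._row = row"]
    parts += [TEMPLATE.format(i=i) for i in range(num_columns)]
    return "\n".join(parts)

if __name__ == "__main__":
    # 8,000 columns yields a file of well over 40,000 lines,
    # comparable in size to the files described in this report.
    with open("generated_columns.py", "w") as f:
        f.write(generate_class(8000))
```

Opening the resulting `generated_columns.py` in VS Code and triggering completion on an instance of `GeneratedColumns` should reproduce the multi-second delay.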
Logs
Output from Python output panel: empty.
Output from Console window (Help->Developer Tools menu): empty.