
auto complete suggestion takes almost 6 seconds [Performance] #576

Closed
killown opened this issue Jan 12, 2018 · 7 comments
Labels
area-intellisense LSP-related functionality: auto-complete, docstrings, navigation, refactoring, etc. bug Issue identified by VS Code Team member as probable bug

Comments


killown commented Jan 12, 2018

Auto complete suggestions take almost 6 seconds to appear in a file with 900 lines.

Environment data

VS Code version: 1.19.1
Python Extension version: 0.9.1
Python Version: 3.5
OS and version: Manjaro "updated"

Actual behavior

Auto complete suggestions take about 6 seconds to appear.

Expected behavior

Suggestions should appear nearly instantly.

Steps to reproduce:

Logs

Output from Python output panel

Output from Console window (Help->Developer Tools menu)

workbench.main.js:140176 [Extension Host] debugger inspector at %cDebugger listening on port 9334.
Warning: This is an experimental feature and could change at any time.
To start debugging, open the following URL in Chrome:
    chrome-devtools://devtools/bundled/inspector.html?experiments=true&v8only=true&ws=127.0.0.1:9334/d2af9450-6983-4e59-8a41-0ac4a7f5dbd8
workbench.main.js:18913 [Extension Host] (node:16144) DeprecationWarning: os.tmpDir() is deprecated. Use os.tmpdir() instead.
@brettcannon
Member

Can you provide us with sample code that triggers the 6 second delay?

@brettcannon brettcannon added bug Issue identified by VS Code Team member as probable bug area-intellisense LSP-related functionality: auto-complete, docstrings, navigation, refactoring, etc. info-needed Issue requires more information from poster labels Jan 12, 2018
@MikhailArkhipov

It has been a while. We need sample code to determine whether the issue is in jedi or elsewhere. Feel free to reactivate.


balta2ar commented Feb 1, 2018

I'd like to share my experience. I also noticed really annoying delays. Sample code is very simple:

from tensorflow import tf
tf.

I'm using tensorflow 1.5, python 3.6 in a conda environment (someone in the comments noticed that jedi works faster with python3 than with python2).

Actually, I mentioned this issue here: davidhalter/jedi#937 (comment)
I'm not 100% sure what's causing these delays: VSCode, or jedi alone. I said in that comment that the delay is about 2 seconds. Now it feels closer to 1 second, but if you type tf. a lot, the delay really gets on your nerves because it's far from instant. And it's definitely larger than the 300 ms we can see in the "synthetic" jedi test mentioned in the comment (I'd be glad to have only 300 ms!). Could it be that VSCode is discarding some data structures (maybe along with jedi itself) that could be kept and reused?

I'm not sure how to track it down and measure it exactly. Besides, VSCode doesn't use the latest version of jedi, which might differ in terms of performance (but that's not certain). At least it could be checked whether VSCode adds to that delay, right?
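One way to start measuring is a small timing harness around the completion call. This is only a sketch: the jedi call shown in the comment is hypothetical, since jedi's API has changed across versions, and the demonstration uses a cheap stand-in so it runs anywhere:

```python
import time

def time_call(fn, repeat=3):
    """Call fn repeatedly and return the per-call elapsed times in seconds."""
    timings = []
    for _ in range(repeat):
        start = time.perf_counter()
        fn()
        timings.append(time.perf_counter() - start)
    return timings

# Hypothetical jedi usage (exact API differs between jedi versions):
#   import jedi
#   script = jedi.Script("import tensorflow\ntensorflow.")
#   timings = time_call(lambda: script.complete(2, 11))
# If only the first call is slow, module analysis dominates and caching
# would help; if every call is slow, the per-request work is the bottleneck.

# Demonstration on a cheap stand-in call:
timings = time_call(lambda: sorted(range(10000)))
print(len(timings))  # → 3
```

Comparing the first call against the later ones would also show whether VSCode (or jedi) is throwing away reusable state between requests.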

@brettcannon brettcannon added awaiting 1-verification and removed info-needed Issue requires more information from poster labels Feb 1, 2018
@brettcannon brettcannon reopened this Feb 1, 2018
@DonJayamanne

@balta2ar
Could you please download our Insiders Build of the extension and give this a go?
We're using the latest version of Jedi, and this improves performance significantly.
At my end the autocompletion list pops up in under 1 second.

Please try the following code:

from tensorflow import <>

Or

import tensorflow
tensorflow.<>

The following code doesn't provide any autocompletion (Jedi issue):

from tensorflow import tf
tf.

@DonJayamanne DonJayamanne added info-needed Issue requires more information from poster and removed needs verification labels Feb 22, 2018
@balta2ar

I've installed from VSIX and now the version says "2018-2.0-beta", so hopefully I'm trying the right version.

However, the delay is still there for me. I agree it's around one second, but it's still annoying... I can't say it got any faster after the update, but that's just my subjective perception. I'd be glad to give actual numbers if you could guide me through instrumenting the code; I'm just not sure where and how to add measurements.

Though, as David said in another jedi issue, we may very well be approaching the limits of jedi in terms of performance, and it's hard to squeeze out more without aggressive caching.

I think I mistyped in my comment and I wanted to write this:

import tensorflow as tf
tf.

but you got me right.
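The two import forms behave differently, which is likely why jedi had nothing to complete for the mistyped version. A sketch of the distinction, using a stdlib module as a stand-in so it runs without tensorflow installed:

```python
# "import tensorflow as tf" binds the module object under an alias;
# json stands in for tensorflow here:
import json as j
print(type(j).__name__)  # → module

# "from tensorflow import tf" instead asks for an attribute named tf
# *inside* the package; the analogous stdlib form fails the same way:
try:
    from json import json  # the json module has no attribute "json"
except ImportError as e:
    print("ImportError:", e)
```

With the alias form, jedi can resolve the module and offer its members; with the broken `from ... import` form there is no object to complete against.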

@MikhailArkhipov

You may be interested in https://github.com/Microsoft/vscode-python/projects/7#card-8587079 then :-)

@MikhailArkhipov

No delay with the VS engine; comparable to PyCharm.

@MikhailArkhipov MikhailArkhipov self-assigned this Apr 25, 2018
@MikhailArkhipov MikhailArkhipov added this to the May 2018 milestone Apr 25, 2018
@brettcannon brettcannon modified the milestones: May 2018, June 2018 May 7, 2018
@brettcannon brettcannon added needs upstream fix and removed info-needed Issue requires more information from poster labels May 7, 2018
@brettcannon brettcannon removed this from the June 2018 milestone Jun 4, 2018
@MikhailArkhipov MikhailArkhipov removed their assignment Jul 10, 2018
@lock lock bot locked as resolved and limited conversation to collaborators Oct 15, 2018