Speed issues with benchmarks #910
Just to be clear: v0.10.2 also has the same problem? (you are using it in the
Yes, even
Can you provide some source to trigger/reproduce this?
Well, the steps to reproduce would be: clone/install jedi-vim (since the submodule in it is updated to the latest version of jedi, v0.10.2), then:

$ python3 -m venv testdir  # wait for it to get created
$ cd testdir
~/testdir $ . bin/activate
(testdir) ~/testdir $
# for this example I'm using the matplotlib package
(testdir) ~/testdir $ python3 -m pip install matplotlib
# wait for the package to get installed
(testdir) ~/testdir $ gvim plot_graph.py
# any filename would do (this is just an example)

Now, inside of vim, type:
This is certainly strange. So you get completions on matplotlib, but they are slow? While I'm not using matplotlib, I'm pretty certain Jedi got a lot faster in the last version upgrade. The tests run quite a bit faster, and there haven't been changes that would make it slower, IMO. Is the matplotlib completion slower the second time you use it? It would be hugely beneficial if your reproduction steps didn't involve an editor setup. This is Jedi, not jedi-vim. Bug reports here should look like this:

You could enable speed debugging if you do this as well.
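(For reference, the kind of editor-free benchmark being asked for can be sketched with the standard library alone. The `bench` helper below is hypothetical, not part of jedi; with jedi installed you would pass it `lambda: jedi.Script('import matplotlib.').completions()` instead of the stand-in workload.)

```python
import time

def bench(fn, repeat=3):
    """Call fn() `repeat` times and return the best wall-clock time in ms."""
    times = []
    for _ in range(repeat):
        start = time.perf_counter()
        fn()
        times.append((time.perf_counter() - start) * 1000.0)
    return min(times)

# Stand-in workload; with jedi installed you would time
# jedi.Script('import matplotlib.').completions() here instead.
print(f"{bench(lambda: sorted(range(100_000))):.2f} ms")
```

Taking the best of a few runs (rather than a single run, as `%time` does) smooths out filesystem-cache effects between the "cold" and "cached" cases discussed below.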
First off, sorry I had to delete the previous comment. I had accidentally hit "Comment", which submitted it while it was still incomplete. I'm sure you'd have gotten a mail, and there are lots of differences between the original comment and this one, so I thought I might as well delete it and restart afresh. Sorry about that! My apologies for this delayed response; I was trying out Jedi with various Python packages.

Yes. First I want to say that the delay occurs for packages with lots of modules/functions to import, but I felt it was slightly slower overall. (To see how large the delay was, I ran the trial above in an IPython terminal, as per your suggestion.) One thing I want to mention before I go further: in the case of numpy, the whole package is imported, usually as

This was tested on:
First, with jedi 0.9.0:

In [1]: import jedi
In [2]: jedi.__version__
Out[2]: '0.9.0'
In [3]: %time jedi.Script('import matplotlib.').completions()
CPU times: user 50.3 ms, sys: 2.97 ms, total: 53.3 ms
Wall time: 53.3 ms
Out[3]:
[<Completion: afm>,
<Completion: animation>,
<Completion: artist>,
<Completion: axes>,
<Completion: axis>,
<Completion: backend_bases>,
<Completion: backend_managers>,
<Completion: backend_tools>,
<Completion: backends>,
<Completion: bezier>,
...
]
In [4]: # cached matplotlib
In [5]: %time jedi.Script('import matplotlib.').completions()
CPU times: user 7.87 ms, sys: 1.92 ms, total: 9.8 ms
Wall time: 9.43 ms
Out[5]:
[<Completion: afm>,
<Completion: animation>,
<Completion: artist>,
<Completion: axes>,
<Completion: axis>,
<Completion: backend_bases>,
<Completion: backend_managers>,
<Completion: backend_tools>,
<Completion: backends>,
<Completion: bezier>,
...
]
In [6]: %time jedi.Script('from numpy import ').completions()
CPU times: user 686 ms, sys: 325 ms, total: 1.01 s
Wall time: 686 ms
Out[6]:
[<Completion: abs>,
<Completion: absolute>,
<Completion: absolute_import>,
<Completion: add>,
<Completion: add_docstring>,
<Completion: add_newdoc>,
<Completion: add_newdoc_ufunc>,
<Completion: add_newdocs>,
<Completion: alen>,
<Completion: all>,
...
]
In [7]: # cached numpy
In [8]: %time jedi.Script('from numpy import ').completions()
CPU times: user 87.6 ms, sys: 10.6 ms, total: 98.2 ms
Wall time: 98.6 ms
Out[8]:
[<Completion: abs>,
<Completion: absolute>,
<Completion: absolute_import>,
<Completion: add>,
<Completion: add_docstring>,
<Completion: add_newdoc>,
<Completion: add_newdoc_ufunc>,
<Completion: add_newdocs>,
<Completion: alen>,
<Completion: all>,
...
]
In [9]: %time jedi.Script('from collections import ').completions()
CPU times: user 75.4 ms, sys: 7.06 ms, total: 82.5 ms
Wall time: 82.7 ms
Out[9]:
[<Completion: abc>,
<Completion: ABCMeta>,
<Completion: abstractmethod>,
<Completion: AsyncIterable>,
<Completion: AsyncIterator>,
<Completion: Awaitable>,
<Completion: bytearray_iterator>,
<Completion: bytes_iterator>,
<Completion: ByteString>,
<Completion: Callable>,
...
]
In [10]: # cached collections module
In [11]: %time jedi.Script('from collections import ').completions()
CPU times: user 24 ms, sys: 2 µs, total: 24 ms
Wall time: 23.9 ms
Out[11]:
[<Completion: abc>,
<Completion: ABCMeta>,
<Completion: abstractmethod>,
<Completion: AsyncIterable>,
<Completion: AsyncIterator>,
<Completion: Awaitable>,
<Completion: bytearray_iterator>,
<Completion: bytes_iterator>,
<Completion: ByteString>,
<Completion: Callable>,
...
]

(pythons) $ python3 -m pip install jedi --upgrade
Collecting jedi
Using cached jedi-0.10.2-py2.py3-none-any.whl
Installing collected packages: jedi
Found existing installation: jedi 0.9.0
Uninstalling jedi-0.9.0:
Successfully uninstalled jedi-0.9.0
Successfully installed jedi-0.10.2

Second, with jedi 0.10.2:

In [1]: import jedi
In [2]: jedi.__version__
Out[2]: '0.10.2'
In [3]: %time jedi.Script('import matplotlib.').completions()
CPU times: user 107 ms, sys: 978 µs, total: 108 ms
Wall time: 109 ms
Out[3]:
[<Completion: afm>,
<Completion: animation>,
<Completion: artist>,
<Completion: axes>,
<Completion: axis>,
<Completion: backend_bases>,
<Completion: backend_managers>,
<Completion: backend_tools>,
<Completion: backends>,
<Completion: bezier>,
...
]
In [4]: # cached matplotlib
In [5]: %time jedi.Script('import matplotlib.').completions()
CPU times: user 7.08 ms, sys: 60 µs, total: 7.14 ms
Wall time: 6.82 ms
Out[5]:
[<Completion: afm>,
<Completion: animation>,
<Completion: artist>,
<Completion: axes>,
<Completion: axis>,
<Completion: backend_bases>,
<Completion: backend_managers>,
<Completion: backend_tools>,
<Completion: backends>,
<Completion: bezier>,
...
]
In [6]: %time jedi.Script('from numpy import ').completions()
CPU times: user 3.47 s, sys: 320 ms, total: 3.79 s
Wall time: 3.47 s
Out[6]:
[<Completion: abs>,
<Completion: absolute>,
<Completion: absolute_import>,
<Completion: add>,
<Completion: add_docstring>,
<Completion: add_newdoc>,
<Completion: add_newdoc_ufunc>,
<Completion: add_newdocs>,
<Completion: alen>,
<Completion: all>,
...
]
In [7]: # cached numpy
In [8]: %time jedi.Script('from numpy import ').completions()
CPU times: user 221 ms, sys: 2.05 ms, total: 223 ms
Wall time: 225 ms
Out[8]:
[<Completion: abs>,
<Completion: absolute>,
<Completion: absolute_import>,
<Completion: add>,
<Completion: add_docstring>,
<Completion: add_newdoc>,
<Completion: add_newdoc_ufunc>,
<Completion: add_newdocs>,
<Completion: alen>,
<Completion: all>,
...
]
In [9]: %time jedi.Script('from collections import ').completions()
CPU times: user 268 ms, sys: 4.94 ms, total: 273 ms
Wall time: 325 ms
Out[9]:
[<Completion: abc>,
<Completion: ABCMeta>,
<Completion: abstractmethod>,
<Completion: AsyncIterable>,
<Completion: AsyncIterator>,
<Completion: Awaitable>,
<Completion: bytearray_iterator>,
<Completion: bytes_iterator>,
<Completion: ByteString>,
<Completion: Callable>,
...
]
In [10]: # cached collections module
In [11]: %time jedi.Script('from collections import ').completions()
CPU times: user 22.7 ms, sys: 932 µs, total: 23.7 ms
Wall time: 23.3 ms
Out[11]:
[<Completion: abc>,
<Completion: ABCMeta>,
<Completion: abstractmethod>,
<Completion: AsyncIterable>,
<Completion: AsyncIterator>,
<Completion: Awaitable>,
<Completion: bytearray_iterator>,
<Completion: bytes_iterator>,
<Completion: ByteString>,
<Completion: Callable>,
...
]

*(numpy without cache took a full 3 seconds)

Except in the case of matplotlib with the cache, the results are slower. In the terminal the delay was somewhat not that big of a deal, but in vim with jedi-vim it is definitely noticeable.

Edit: one other metric I wanted to add was for the

Turns out
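Summarising the wall times reported in the two transcripts above, the slowdown from 0.9.0 to 0.10.2 can be computed directly (the numbers are copied from the transcripts; this is just arithmetic on the reported figures, not a new measurement):

```python
# Wall times in ms: (v0.9.0, v0.10.2), copied from the transcripts above.
timings = {
    "import matplotlib. (cold)":        (53.3, 109.0),
    "import matplotlib. (cached)":      (9.43, 6.82),
    "from numpy import (cold)":         (686.0, 3470.0),
    "from numpy import (cached)":       (98.6, 225.0),
    "from collections import (cold)":   (82.7, 325.0),
    "from collections import (cached)": (23.9, 23.3),
}
for case, (old, new) in timings.items():
    print(f"{case}: {new / old:.1f}x")
# The cold numpy case shows the largest regression, roughly 5x.
```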
Thanks for the research!! I like those comparisons, because they make it easier to look for the actual issues. I'm not too surprised that the values differ quite a bit. There are so many things that have changed between those versions:

I'm definitely leaving this open. Do you still have issues where you don't get certain completions, or is it just about speed/performance now?
Sorry for the late response, I had been busy! 😅

The issue is mainly about speed/performance. The completions do pop up, but they are slower, that's it!
On my machine, completions for numpy are even slower. Using v0.10.2 I need 5 s to complete
I have investigated quite a bit. The problem seems to come from multiple type-inference improvements and security features; it currently takes 2-3 times longer than in older versions. A big part of the issue is the huge number of names numpy imports. IMO the actual solution for this would be a database index (#1059). Even 100 ms is pretty slow, IMO; the database index would solve quite a lot of issues as well as speed. The way numpy defines 1000 different objects that can be imported just doesn't go well with Jedi, and in general this is not how people program in Python. Therefore you shouldn't assume that Jedi is now slower than before; it's just slower for numpy.
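The point about the sheer number of importable names can be illustrated with a standard-library module (numpy's top-level namespace is far larger; this snippet only shows how such a count is obtained, not jedi's actual mechanism):

```python
import collections

# Public names a `from collections import <Tab>` completer has to offer.
public = [name for name in dir(collections) if not name.startswith("_")]
print(len(public), "public names in collections")
# numpy exposes on the order of a thousand top-level names, which is why
# completing `from numpy import ` is so much more expensive to infer.
```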
Thanks again for the benchmarks. I'm closing this issue in favor of #1059. I know that Jedi has performance issues with
I recently updated jedi-vim, and with it the jedi submodule, and it broke. Before, I used to have near-instantaneous completion suggestions when I typed a dot, but after the update it all broke. Any kind of dot completion either takes very long or fails completely (with vim saying "Omni completion pattern not found").

So I had a rough idea that I was on jedi v0.9.0 (a few commits after the official tagging of 0.9.0), and after the update I moved to v0.10.0.

I ran

git diff v0.9.0..v0.10.0

which generated this: jedi-diff-oneline diff (the link points to a gist). I went ahead and manually tested quite a number of commits and added a few markers for when a certain commit failed:

-XXX- (or -X-) --> indicates that the commit failed / broke the plugin
-***- --> indicates that the commit works! (triple asterisk)

(sorry for the weird markers! 😅)

@blueyed had mentioned that he had used git-bisect to figure out that commit 8f4b862 affected the plugin, but that the bug had originated at a much earlier commit, 2be5da3, which is where the problem starts.

ebd080a is (roughly) the last commit that provided the hints (e.g. whether the completing word was a class/function/module, such as the matplotlib module shown as

matplotlib module:matplotlib

but on later commits we only see

matplotlib matplotlib

). Some more description is provided next to the works/fails markers in the gist (e.g. if the dot fails and :JediDebugInfo works, or vice versa). I plan to update the gist with more testing in my free time; in the meantime I hope you'll find the gist useful and that it makes it easier for you to find the bug. Also, thanks for the great plugin!
Edit:
Some info about my setup:
I'm using Python 3 on a Fedora 24 system
vim version:

(Before running JediDebugInfo, I reset the submodule to point to the most recent commit on my machine: 5427b02.)

Running :JediDebugInfo inside vim gives: