
painfully slow auto completions #1422

Closed
kirk86 opened this issue Oct 11, 2019 · 15 comments

@kirk86

kirk86 commented Oct 11, 2019

I've tried all of the usual packages: elpy, anaconda-mode, company-jedi. In all cases I find the completions extremely slow; even when completions are forced to trigger on a dot, you still have to wait a couple of seconds before they become available.

Is this normal?
Maintainers of some of the above packages believe it is caused by jedi (see the discussion linked here, towards the end). I've seen this mentioned, but I don't know what it means or how to make use of it.

@davidhalter
Owner

Can you give a bit more context? I know that completions are sometimes slow, but not always. So when are they slow for you?

Also, the database index ticket (#1059) is meant to cache certain results from type inference, which can speed up completions by a lot.

I'm also thinking about rewriting Jedi in Rust, but that's a lot of work. Speaking about completions generally, it's simply non-trivial to make them fast AND correct in Python. I've been trying for years, and it has probably cost me about three years of working "full time" on this (I've been working on this problem for almost 7 years now).

@kirk86
Author

kirk86 commented Oct 14, 2019

@davidhalter Hi Dave, thanks for the reply.

So when are they slow for you?

This happens with pretty much any package I use. For instance, I have a dummy Python file that imports packages like numpy and matplotlib; to get completions for those I have to type np., stop typing, and wait until the completions become available. There has never been a case where completions show up on the fly as I type, and I'm a slow typist, not fast at all compared to some other folks. Recently I tried the Microsoft language server (LSP) and I have to say it is much snappier than anything I've tried previously. Maybe that's a good place to start, to see whether any of its techniques could be ported over to jedi, although I have zero knowledge of the internal workings of either jedi or the Microsoft language server.
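
For what it's worth, the delay being described can be measured outside any editor with the jedi API directly. A minimal timing sketch, assuming jedi >= 0.16 (where Script.complete(line, column) exists):

import time
import jedi

source = "import numpy as np\nnp."
script = jedi.Script(source)

start = time.perf_counter()
completions = script.complete(line=2, column=3)  # cursor just after "np."
print(f"{len(completions)} completions in {time.perf_counter() - start:.2f}s")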

I've been trying for years and it's probably cost me about three years of working "full time" on this (I've been working on this problem for almost 7 years now).

Believe me, I completely understand the frustration you must have gone through, and that's why I appreciate the work that folks like yourself are doing. I consider it a service to the whole community!

@davidhalter
Owner

Recently I tried the Microsoft language server (LSP) and I have to say it is much snappier than anything I've tried previously.

There's a reason for that: they use a type inference cache (essentially a database). This enables certain things, but it's also not always the greatest solution. There's a reason why VSCode still defaults to Jedi for completions and not their own language server. Jedi is still better if you don't use one of numpy/tensorflow/matplotlib/scipy, etc.

Essentially, if you do non-scientific development, Jedi is usually almost instant (after the first few completions).
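
To illustrate the type-inference-cache idea being discussed, here is a purely hypothetical sketch, not Jedi's or Microsoft's actual design: a persistent cache keyed by file path and modification time lets repeat queries skip re-inference. The infer_module_types function is a placeholder for the expensive step.

import os
import shelve

def cached_module_types(path, infer_module_types, cache_file="inference_cache"):
    # Key on path + mtime so edits to the file invalidate the cached entry.
    key = f"{path}:{os.path.getmtime(path)}"
    with shelve.open(cache_file) as cache:
        if key not in cache:
            cache[key] = infer_module_types(path)  # hypothetical expensive inference step
        return cache[key]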

@davidhalter
Owner

davidhalter commented Oct 16, 2019

I'm closing this in favor of the database solution in #1059; I really see no other solution. Feel free to keep asking questions. I'm happy to answer.

@kirk86
Author

kirk86 commented Oct 16, 2019

Thanks, Dave

This enables certain things, but it's also not always the greatest solution

I know; I found that out recently as well.

@davidhalter
Owner

Jedi just tries to be as correct as possible. I've been working on speeding Jedi up by a lot in the last 2-3 months, but that's just the beginning. It will probably take 2-3 years until I have something ready that makes sense.

@ghshephard
Copy link

Thanks for writing this - I've been spending a bit of time with autocomplete tonight (Python 3.8.0, IPython 7.13.0, jedi 0.17.0), and I've noticed that for an example like this:

import pandas as pd
df = pd.DataFrame([[1, 2, 3]], columns="A B C".split())

trying to autocomplete

df['A'].me [tab]

in IPython takes ~2 seconds with jedi, but about 250 ms with jedi disabled via:

c = get_config()
c.Completer.use_jedi = False 
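
For anyone reproducing this: the snippet above belongs in an IPython profile's ipython_config.py, where get_config() is available. The same setting can also be toggled inside a running session with the %config magic:

%config Completer.use_jedi = False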

I'm glad to know it's not just me, and that jedi performance is under ongoing development. I'm definitely looking forward to checking back as new versions are released. Thanks very much.

@davidhalter
Owner

@ghshephard Is it also slow after the initial slowness? Because one of the problems is #1059: Jedi doesn't do much caching outside of its process, so initial completions can be quite slow. :/
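
One way to check this cold-vs-warm behaviour directly is a sketch using the jedi API on the three-line test case from above, assuming jedi >= 0.16 (the second attempt benefits from jedi's in-process caches):

import time
import jedi

source = (
    "import pandas as pd\n"
    'df = pd.DataFrame([[1, 2, 3]], columns="A B C".split())\n'
    "df['A'].me"
)
for attempt in (1, 2):
    start = time.perf_counter()
    jedi.Script(source).complete()  # line/column default to the end of the code
    print(f"attempt {attempt}: {time.perf_counter() - start:.2f}s")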

@ghshephard
Copy link

Yes, the behavior is consistent on the second and further uses. I'll follow the jedi project and keep re-trying my 3-line test case and report back if anything changes. Thanks for all your hard work!

@davidhalter
Copy link
Owner

What's your hardware like?

@ghshephard
Copy link

64 GB RAM, i7-8700 @ 3.2 GHz (6 cores / 12 threads). The system is reasonably beefy and isn't slow on any other operations that I can discern. Geekbench 5 scores: 1157 single-core, 6032 multi-core.

@davidhalter
Copy link
Owner

SSD or HDD or Cloud?

@ghshephard
Copy link

ghshephard commented Apr 18, 2020 via email

@davidhalter
Copy link
Owner

I would argue that it's more like 500 ms for me. 2 s seems very long, especially because your system seems more modern than mine (I'm working on a four-year-old notebook). I'm wondering if this is related to Linux.

I'm glad to know it's not just me, and that jedi performance is under ongoing development. I'm definitely looking forward to checking back as new versions are released. Thanks very much.

I'm not sure how to optimize much further. There is, of course, #1059, but I might just do that stuff in Rust to get it really fast. However, that might mean Jedi itself won't be getting that kind of cache, so I don't really see a good solution at the moment. :/

There is already some trivial caching for pandas that makes it faster, but it's just a lot of identifiers...

@davidhalter
Owner

@ghshephard I had one more idea for improving speed (without odd side effects). I think this should help quite a lot in your case.

It still feels strange to me that Jedi is so much faster when I use it, but this should help nonetheless (it improved things for me as well).

So please test; I think it's pretty much as simple as pip install -e git+https://github.com/davidhalter/jedi, IIRC.
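
For reference, pip's editable (-e) installs from a VCS URL need an #egg= fragment; a non-editable install from the same repository also works. These variants are standard pip syntax, not from the original thread:

pip install -e "git+https://github.com/davidhalter/jedi#egg=jedi"
pip install git+https://github.com/davidhalter/jedi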
