Stuck in long-running loop for already satisfied requirement when using resolver. #8883
I used a third-party dependency resolver (Poetry) to resolve all the abstract dependencies I have, and found a conflict. Here's the list:
The conflict occurs when we're installing
It took only 36.9 seconds to find the conflict. I would rephrase this issue as "very slow (infinite loop?) resolution when having a conflict, with unhelpful logging".
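For reference, the general shape of an unsatisfiable requirement set looks something like the following sketch (package names are hypothetical and only illustrate what "having a conflict" means here; pip's resolver is expected to report such a conflict rather than keep looping, and the `--use-feature=2020-resolver` flag applies to pip 20.2-era releases where the new resolver is opt-in):

```sh
# Hypothetical example of an internally conflicting requirements.txt:
# package-a and package-b pin incompatible versions of a shared dependency,
# so no resolution exists and the resolver should stop with a clear error.
cat > requirements.txt <<'EOF'
package-a==1.0   # depends on common-dep<2.0
package-b==1.0   # depends on common-dep>=2.0
EOF
python -m pip install --use-feature=2020-resolver -r requirements.txt
```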
I re-worded the title to try to reflect the issue at hand here.
I’ve been trying to work on this, but found it difficult to come up with a test case to compare various implementations. Would it be possible for you to provide some sort of workflow or requirements.txt that I can run?
The one I mentioned in the second comment is a sample (as a requirements.txt) that gets stuck when running the pip resolver (just checked). The message is not exactly the same as the first one, though.
Can you share the requirements.txt’s content? I can’t craft one that exhibits the problem.
Hmm, so there is a bug in here. The resolver is trying to pin tensorflowjs 2.0.1.post1 and 2.1.0 repeatedly. Neither of them is satisfactory, but for some reason the resolver does not know it should give up. So the bottom line is: the set of requirements here has internal conflicts and can never resolve, and you’ll need to fix that. But the resolver isn’t correctly detecting the conflict, which is a bug. Not sure why that’s happening; it could be a logical bug in the resolver or a misimplementation in the provider. The next thing to do here is to come up with a minimal reproducible example; the current requirements list is way too big and would take too long to debug.
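While a minimal example is being worked out, one way to gather more context is to run the install with the new resolver and verbose logging enabled (a sketch; again assuming a pip 20.2-era release where the new resolver is behind the `--use-feature=2020-resolver` flag):

```sh
# Run the problematic install with the new resolver and save a verbose log
# that shows which candidates are being pinned and rejected repeatedly.
python -m pip install --use-feature=2020-resolver -r requirements.txt -v 2>&1 | tee pip-resolve.log
```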
This is the same as #9011, not #8713 -- the new resolver is expected to try different versions of a package during backtracking. What it's not expected to do is get stuck repeatedly trying to pin the exact same package versions, as mentioned by @uranusjr above.
I'll close this as a duplicate of #9011, since it has more discussion related to the incorrect pinning behaviour.
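As a stop-gap while the requirement set itself gets fixed, one way to cut down the search space is a constraints file that pins the package the resolver keeps re-trying (a sketch, not a fix from this thread; the pinned version is an assumption and must actually be compatible with the rest of the requirements):

```sh
# Pin tensorflowjs so the resolver does not oscillate between candidates.
echo "tensorflowjs==2.0.1.post1" > constraints.txt
python -m pip install --use-feature=2020-resolver -c constraints.txt -r requirements.txt
```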
What did you want to do?
Installing the tfx package with HEAD versions of some dependencies. Due to Google's monolithic code system, during development the tfx package depends on the HEAD versions of some internal dependencies, such as tensorflow-data-validation, tensorflow-model-analysis, etc. (there are 5 dependent libraries in total). We're installing the HEAD versions by building wheels from the source code of the dependencies and then passing all the wheels to the pip install command. So this is a bit hard to reproduce, but I hope we can get some debugging ideas or the possible failure cause from the deceptive error logs.
Output
Additional information
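A rough sketch of the build-and-install workflow described under "What did you want to do?" (directory names and the exact invocation are assumptions, not the reporter's literal commands):

```sh
# 1) Build wheels from the HEAD checkouts of the internal dependencies.
pip wheel --no-deps -w ./wheels \
    ./tensorflow-data-validation ./tensorflow-model-analysis   # ...plus the other HEAD dependencies

# 2) Pass all of the resulting wheels to a single `pip install` call,
#    together with the tfx package itself.
pip install tfx ./wheels/*.whl
```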