Use installed packages to solve dependency graph #1596
OK, I don't have a great handle on every aspect of the caching, and I don't know if this is 100% satisfactory, but this can be worked around somewhat by copying/mounting/sharing a very small JSON cache file (not the wheel itself). I used the container to generate the cache file (depcache-cp3.7.json), which looks like this:

```json
{
  "__format__": 1,
  "dependencies": {
    "torch": {
      "1.10.0": [
        "typing-extensions"
      ]
    },
    "typing-extensions": {
      "4.1.1": []
    }
  }
}
```

And now:

```console
$ podman run --rm -it -v $PWD/depcache-cp3.7.json:/root/.cache/pip-tools/depcache-cp3.7.json:rw docker://pytorch/pytorch:1.10.0-cuda11.3-cudnn8-runtime bash
# pip install pip-tools
# echo "torch==1.10.0" >requirements.in
# time pip-compile
#
# This file is autogenerated by pip-compile with python 3.7
# To update, run:
#
#    pip-compile
#
torch==1.10.0
    # via -r requirements.in
typing-extensions==4.1.1
    # via torch

real    0m1.662s
user    0m0.427s
sys     0m0.048s
```
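Since the original request is about running pip-compile during a docker build rather than podman, the same trick presumably translates to a Dockerfile along these lines (a sketch only: the cache path and file name come from the podman command above, the rest is assumed):

```dockerfile
FROM pytorch/pytorch:1.10.0-cuda11.3-cudnn8-runtime
RUN pip install pip-tools
# Seed the pip-tools dependency cache so pip-compile does not need to
# download the large torch wheel just to read its metadata.
COPY depcache-cp3.7.json /root/.cache/pip-tools/depcache-cp3.7.json
COPY requirements.in .
RUN pip-compile requirements.in
```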
Thanks for looking into this so quickly! I'll try this out. It's not ideal (I'll have to explain the magic config file), but it's a better workaround than mine, which downloads and caches the wheel in the docker build before installing pip-tools.
Maybe this is a problem with how pip/PyPI works. The dependency graph should be solvable without downloading ANY packages, if PyPI exposed some small file(s) or an API. I think it would still be nice to have requirements.in parse something like […] to ignore/trust an existing package, but maybe I'm alone in that.

update:
I'm still interested in this issue, and I still don't have all the answers. But I'll now add another workaround, along the lines of your last suggestion. Be warned: it sacrifices the total locking guarantees, but will "probably" (😓) be fine:

```console
# echo "transformers" >>locked.in
# pip-compile locked.in
# echo "-r locked.txt" >>requirements.txt
# echo "torch" >>installed.txt
# echo "-r installed.txt" >>requirements.txt
# pip install -r requirements.txt
```
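Spelling out what those commands leave behind (just the literal file contents; locked.txt is whatever pip-compile pins from locked.in):

```console
# cat requirements.txt
-r locked.txt
-r installed.txt
# cat installed.txt
torch
```

So everything from locked.in is fully pinned, while torch is left unpinned and pip will skip it if an installed copy already satisfies the requirement.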
I'm OK with sacrificing "total locking" guarantees, as some of the locking is done by pinning a docker base image. Thanks for the new workaround, but I don't think it would fix the situation, since transformers depends on torch (which you wouldn't have known a priori). I'm in a meeting but can test in a bit.
Oh yeah, sorry, this method won't help in this case.
I think this is it really. Unfortunately, especially with […].

That said, in the container you're using, the relevant info does seem to be available in […]. And in normally installed packages, we may find the details needed in e.g. […]. So maybe we can update our cache file with data from those sources. If we do, I don't know if it should be done by default, as it has different security implications than using the PyPI data/packages.

I'll also link some related issues:
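As an aside (an illustration, not the proposal itself): for installed packages, pip can already report declared dependencies from the local metadata, e.g. in the container used above (output abridged):

```console
# pip show torch
Name: torch
Version: 1.10.0
...
Requires: typing-extensions
```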
Having an option to look through […]
I have a similar problem, but in my case the dependency I have installed in my Docker image is not available on PyPI or our internal index (but may be in our internal index in the future), due to it requiring system libraries. This means I need one of the first two options in the OP (so I can lock all dependencies): consider installed (preferred) or ignore specific packages.
Since this is an edge case relevant to my docker workflow, I'll post my docker workaround: create a docker cache mount for the pip cache. That should avoid repeatedly downloading big wheels (until the cache is cleared). It might be better to use a real mount instead of a cache mount, but I'm no Docker expert, so by all means experiment.
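For readers who haven't used cache mounts, here is a minimal sketch of the idea, assuming BuildKit and the default pip cache location (the image and file names are just placeholders taken from this thread, not the author's actual Dockerfile):

```dockerfile
# syntax=docker/dockerfile:1
FROM pytorch/pytorch:1.10.0-cuda11.3-cudnn8-runtime
RUN pip install pip-tools
COPY requirements.in .
# The BuildKit cache mount persists pip's download cache across builds,
# so large wheels are only downloaded the first time.
RUN --mount=type=cache,target=/root/.cache/pip pip-compile requirements.in
```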
@sfarina you closed this as completed: could you please link the pull request which completed this issue? Or are you thinking that your Docker cache mount solves your use case, in which case could you please instead close it as won't fix?
won't fix / stale
What's the problem this feature will solve?

I'm trying to build stable requirements.txt files for docker containers built on top of existing containers (specifically pytorch/pytorch:1.10.0-cuda11.3-cudnn8-runtime). To save on image size, these containers don't have the pip cache intact, so pip-compile takes a few minutes to download a large (~1GB) wheel, defeating part of the purpose of using a base docker image. I run pip-compile inside of a docker build -f pipcomile.dockerfile.

Describe the solution you'd like

any of:

Alternative Solutions

Additional context