ABI pinning #40

Closed · jaimergp opened this issue Mar 16, 2021 · 13 comments

@jaimergp (Member) commented Mar 16, 2021

What's the recommended approach to pin pytorch when there are dependencies that link directly to the C/C++ libraries? (Example).

According to pytorch/pytorch#28754, there are no ABI guarantees... So, should we build against every released major.minor version like we do with cudatoolkit? In that case, should that be submitted to conda-forge-pinning-feedstock (at least the pin_compatible kwargs)?
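For reference, a minimal sketch of the `pin_compatible` approach being asked about, as it might appear in a hypothetical downstream feedstock's meta.yaml; the `max_pin` level is illustrative, not a value decided in this thread:

```yaml
requirements:
  host:
    # build (and link) against a concrete pytorch version
    - pytorch
  run:
    # pin the runtime to the same major.minor series used at build time,
    # e.g. building against 1.8.0 would yield roughly "pytorch >=1.8.0,<1.9"
    - {{ pin_compatible('pytorch', max_pin='x.x') }}
```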

@rgommers commented
I don't know enough about pin_compatible, but for the ABI part: yes indeed, there is no ABI stability and it changes a lot. This is why pytorch, torchvision, torchaudio, torchtext, etc. are all released in sync. You want to build against every minor version and pin that at runtime.

Technically I believe even bug fix releases are not guaranteed to be ABI-compatible. I don't know how often ABI changes in bug fix releases; I do see that torchaudio, torchvision et al. get new releases on PyPI for PyTorch bug fix releases.

@jaimergp (Member, Author) commented
Thanks for the clarification, @rgommers! @conda-forge/core, is this a good candidate case to add to conda-forge-pinning-feedstock?
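For illustration, a global pin of this kind would live in the pinning feedstock's conda_build_config.yaml; the version below is only a placeholder, not a value taken from this thread:

```yaml
# conda_build_config.yaml in conda-forge-pinning-feedstock (sketch)
pytorch:
  # placeholder: whichever release(s) conda-forge currently builds against
  - "1.8"
```

Downstream feedstocks listing pytorch in host would then pick this value up automatically, and bumping it would trigger the rebuild migrations mentioned below.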

@beckermr (Member) commented
Yes I'd think that'd be a fine solution. This will cause rebuild migrations as the ABI moves.

@rgommers commented

> Technically I believe even bug fix releases are not guaranteed to be ABI-compatible. I don't know how often ABI changes in bug fix releases.

@seemethere, any advice here?

@rgommers commented

> Technically I believe even bug fix releases are not guaranteed to be ABI-compatible. I don't know how often ABI changes in bug fix releases.

I got an authoritative answer here: it may be okay for bugfix releases, but there are no guarantees, and PyTorch does not have tooling or CI to actually test binary compatibility. So it's better to rebuild all downstream packages which use the C++ API.

Downstream packages should be quite lightweight to build compared to PyTorch itself (torchvision is the heaviest one), and there aren't that many.

@hmaarrfk (Contributor) commented
Just to confirm, should we be pinning cpu/gpu as well? If somebody builds with CPU support, can they use the GPU builds of pytorch and vice versa?

@h-vetinari (Member) commented

> Just to confirm, should we be pinning cpu/gpu as well? If somebody builds with CPU support, can they use the GPU builds of pytorch and vice versa?

I think explicit is better than implicit here.

Currently, we only pin "general" pytorch: https://github.com/conda-forge/pytorch-cpu-feedstock/blob/master/recipe/meta.yaml#L24

PR for discussion: #58
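One common way to express such a "general" pin is a run_exports entry in the pytorch feedstock itself; a simplified sketch, not necessarily the feedstock's exact contents:

```yaml
# recipe/meta.yaml of the pytorch feedstock (simplified sketch)
build:
  run_exports:
    # anything built against this package automatically gets a runtime pin
    # to the same major.minor series
    - {{ pin_subpackage('pytorch', max_pin='x.x') }}
```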

@rgommers commented

> Just to confirm, should we be pinning cpu/gpu as well? If somebody builds with CPU support, can they use the GPU builds of pytorch and vice versa?

It depends on who "somebody" is. I assume only libraries which themselves implement CUDA code. For pure Python packages, and those which only use a bit of the PyTorch (CPU) C++ API, I don't see a need to force them to have separate CPU and GPU packages.

@hmaarrfk (Contributor) commented

@rgommers, I tend to agree with you.

I think we can suggest that people depend on the specific pytorch-cpu or pytorch-gpu metapackages when they need to pin against the GPU-specific ABI.
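A hypothetical downstream recipe following that suggestion might combine the metapackage with the ABI pin like this (metapackage names as discussed in this thread; the pin level is illustrative):

```yaml
requirements:
  host:
    - pytorch
  run:
    # explicitly select the CUDA-enabled variant via the metapackage
    - pytorch-gpu
    # and pin the ABI of the library actually linked against
    - {{ pin_compatible('pytorch', max_pin='x.x') }}
```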

@hmaarrfk (Contributor) commented

We have added conda-forge-wide pinnings.

The discussion on choosing package names has moved to #74.
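With the global pinning in place, and assuming a run_exports-style pin like the sketch earlier in this thread, a downstream feedstock generally only needs to list pytorch in host; a minimal sketch:

```yaml
requirements:
  host:
    # version selected by the conda-forge-wide pinning
    - pytorch
  run:
    - python
    # no explicit pytorch entry needed here: the runtime pin is injected
    # through pytorch's run_exports
```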
