make C extension lazy-import #971
Conversation
Thanks!! Nit: `torchvision/ops/roi_pool.py` should also have exactly the same change as `torchvision/ops/roi_align.py`.
I've tested hub with that change added and confirmed it works.
Codecov Report

@@            Coverage Diff            @@
##           master     #971    +/-   ##
=========================================
+ Coverage   61.07%    61.1%   +0.02%
=========================================
  Files          64       65       +1
  Lines        5082     5091       +9
  Branches      763      764       +1
=========================================
+ Hits         3104     3111       +7
- Misses       1767     1769       +2
  Partials      211      211

Continue to review full report at Codecov.
So hub currently doesn't support loading detection models, or models with custom ops?
hub supports both, we just have to update logic for them. see my follow-up tasks above.
This makes me wonder if moving the Cpp ops to PyTorch wouldn't be a safer approach?

Fixing a packaging problem is the solution -- not making a mono-repo.

@soumith Please correct me if I'm wrong: hub cannot support detection models, since it will still try to import _C (lazily), right?
|
Summary: This should pass once pytorch/vision#971 is merged. To remove torchvision as a baseline, we just compare against the sum of all `param.sum()` values in the pretrained resnet18 model, which means we only need to update the number manually when the pretrained weights change, which is generally rare. Pull Request resolved: #21132 Differential Revision: D15563078 Pulled By: ailzhang fbshipit-source-id: f28c6874149a1e6bd9894402f6847fd18f38b2b7
Just a quick shout that the lazy import mechanism as currently done is likely not a good fit for scriptable ops.
@t-vi This is a good point. We are currently looking into ways of making some ops scriptable in torchvision, so this is something that we need to fix at some point.
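To illustrate the tension @t-vi points at, here is a hedged, eager-mode-only sketch: a compiler that processes function bodies ahead of time (as TorchScript does) cannot handle an import inside the op body, so the extension has to be resolved before the op is compiled. The decorator below resolves it at registration time, which is script-friendly but gives up laziness, since decoration runs at module import. All names are illustrative, and `math` stands in for the compiled `_C` extension.

```python
def _load_extension():
    import math  # stand-in for `from torchvision import _C`
    return math


def resolve_extension(fn):
    # Resolved eagerly, at decoration time -- i.e. at module import.
    ext = _load_extension()

    def wrapper(*args, **kwargs):
        return fn(ext, *args, **kwargs)

    return wrapper


@resolve_extension
def ceil_op(ext, x):
    # By the time an ahead-of-time compiler would see this body, `ext`
    # is just a captured value, not a call-time import.
    return ext.ceil(x)
```

The trade-off in this sketch is exactly the one the comment raises: import at call time (lazy, but opaque to a scripting compiler) or at registration time (compiler-visible, but no longer lazy).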
This allows one to use torchvision models via hub, and defers C extension loading until you invoke a particular op that calls into the C extension.
Arguably, this is worse wrt user experience, and I think we should improve it by at least invoking the lazy import in the constructors (but there's no easy / obvious way to do that for the functional bits, which are what is written in _C right now).
This also unblocks torch.hub, allowing it to load models without failing on _C loading, because the models themselves are self-contained.
Follow-up tasks wrt torch.hub are:
Follow-up tasks wrt _C loading are:
- `if hasattr(_C, "CUDA_VERSION")`
- on something like `libcudart.so.9.0 not found`
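A hedged sketch of the `hasattr(_C, "CUDA_VERSION")` follow-up idea: once the extension loads, inspect it to emit a clear message instead of letting a raw loader error like `libcudart.so.9.0 not found` reach the user. The function name and messages below are illustrative, not torchvision's actual API, and `types.SimpleNamespace` stands in for the compiled extension module.

```python
import types


def describe_cuda_support(ext):
    # A CUDA-enabled build of the extension would expose a CUDA_VERSION
    # attribute; its absence means the extension was built CPU-only.
    if hasattr(ext, "CUDA_VERSION"):
        return "built with CUDA %s" % (ext.CUDA_VERSION,)
    return "built without CUDA support"


cpu_ext = types.SimpleNamespace()                    # no CUDA_VERSION attr
cuda_ext = types.SimpleNamespace(CUDA_VERSION=9000)  # pretend CUDA build
```

Checking the attribute up front lets the library raise a targeted error ("this torchvision build has no CUDA support") rather than a bare dynamic-linker failure.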
cc: @ailzhang @fmassa