Explore bumping up the versions of torch and torchvision we use on the MIRAv2 endpoint #109

Open
nathanielrindlaub opened this issue Apr 7, 2023 · 1 comment

nathanielrindlaub commented Apr 7, 2023

PyTorch started bundling the GPU and CPU code bases in its default wheels at some point, but we can install ONLY the CPU version with:

pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu for the latest versions, or pip install torch==1.13.1+cpu torchvision==0.14.1+cpu torchaudio==0.13.1 --extra-index-url https://download.pytorch.org/whl/cpu for older ones.
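
As a quick sanity check (a rough sketch, not something from this issue), a CPU-only build of torch reports no CUDA support, so we could verify the right wheels landed in the image with something like:

```python
# Sanity check that the CPU-only wheels were installed:
# CPU-only torch builds ship without a bundled CUDA runtime.
import torch
import torchvision

print(torch.__version__, torchvision.__version__)  # e.g. "1.13.1+cpu 0.14.1+cpu"
print(torch.version.cuda)         # None for CPU-only builds
print(torch.cuda.is_available())  # False when no CUDA support is compiled in
```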

Previously, when we tried using more recent versions of torch and torchvision, the image ballooned to >9GB (from ~3GB), which was causing timeout issues. However, we suspect that was because we were also unwittingly bundling the GPU code. We'd want to update the Dockerfile, likely bump the versions we're running locally during TorchScript model compilation, and then re-compile and test.
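
If it helps, here's a minimal sketch of what that local re-compile step could look like; the architecture, weights path, and input shape are placeholders, not the actual MIRAv2 setup:

```python
# Rough sketch of re-compiling a model to TorchScript after bumping
# torch/torchvision locally. The architecture, weights file, and input
# size below are placeholders, not the real MIRAv2 configuration.
import torch
import torchvision

model = torchvision.models.resnet50(weights=None)  # placeholder architecture
model.load_state_dict(torch.load("weights.pth", map_location="cpu"))
model.eval()

# Trace with a dummy input matching the expected preprocessing shape
example = torch.randn(1, 3, 224, 224)
traced = torch.jit.trace(model, example)
traced.save("model_torchscript.pt")
```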

While we're at it, we might as well try running a newer version of TorchServe: https://hub.docker.com/r/pytorch/torchserve/tags?page=1

nathanielrindlaub self-assigned this Apr 7, 2023
nathanielrindlaub (Member Author) commented:

We could give compiling to ONNX a shot too. With torch 2.0 it looks like it may be as simple as torch.onnx.export: https://pytorch.org/tutorials/advanced/super_resolution_with_onnxruntime.html
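
For reference, a minimal export sketch along the lines of that tutorial; the model, input shape, and opset version are placeholders rather than the actual MIRAv2 configuration:

```python
# Rough sketch of exporting a model to ONNX via torch.onnx.export,
# following the linked tutorial. Model, input shape, and opset version
# are placeholders, not the real MIRAv2 setup.
import torch
import torchvision

model = torchvision.models.resnet50(weights=None)  # placeholder model
model.eval()

dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",
    input_names=["input"],
    output_names=["output"],
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},
    opset_version=17,
)
```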
