fix(//notebooks): Fix WORKSPACE template file to reflect new build system layout

Signed-off-by: Naren Dasan <[email protected]>
Signed-off-by: Naren Dasan <[email protected]>
narendasan committed Jul 16, 2020
1 parent 73d804b commit c8ea9b7
Showing 3 changed files with 72 additions and 48 deletions.
24 changes: 24 additions & 0 deletions notebooks/Dockerfile.notebook
@@ -0,0 +1,24 @@
+FROM nvcr.io/nvidia/pytorch:20.03-py3
+
+RUN apt update && apt install curl gnupg
+RUN curl https://bazel.build/bazel-release.pub.gpg | apt-key add -
+RUN echo "deb [arch=amd64] https://storage.googleapis.com/bazel-apt stable jdk1.8" | tee /etc/apt/sources.list.d/bazel.list
+
+RUN apt update && apt install bazel-3.3.1
+RUN ln -s /usr/bin/bazel-3.3.1 /usr/bin/bazel
+
+RUN pip install pillow==4.3.0
+RUN pip install torch==1.5.1
+RUN pip install torchvision==0.6.1
+
+COPY . /workspace/TRTorch
+RUN rm /workspace/TRTorch/WORKSPACE
+COPY ./notebooks/WORKSPACE.notebook /workspace/TRTorch/WORKSPACE
+
+WORKDIR /workspace/TRTorch
+RUN bazel build //:libtrtorch --compilation_mode opt
+
+WORKDIR /workspace/TRTorch/py
+RUN python3 setup.py install
+
+WORKDIR /workspace/TRTorch/notebooks
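The pip pins in the Dockerfile above (pillow 4.3.0, torch 1.5.1, torchvision 0.6.1) have to move in lockstep with the libtorch versions pinned in WORKSPACE.notebook. As an illustrative sketch (the `PINS` table and `to_requirements` helper are hypothetical, not part of this repository), keeping the pins in one place makes it easy to render them as pip requirement specifiers:

```python
# Hypothetical helper: the version pins from the Dockerfile above, kept in one
# table so the image and any requirements file stay in sync.
PINS = {"pillow": "4.3.0", "torch": "1.5.1", "torchvision": "0.6.1"}

def to_requirements(pins):
    """Render pins as pip requirement specifiers, e.g. 'torch==1.5.1'."""
    return [f"{name}=={version}" for name, version in sorted(pins.items())]

print("\n".join(to_requirements(PINS)))
```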
10 changes: 5 additions & 5 deletions notebooks/README.md
@@ -3,24 +3,24 @@ This folder contains demo notebooks for TRTorch.
 
 ## 1. Requirements
 
-The most convenient way to run these notebooks is via a docker container, which provides a self-contained, isolated and re-producible environment for all experiments. 
+The most convenient way to run these notebooks is via a docker container, which provides a self-contained, isolated and re-producible environment for all experiments.
 
 First, clone the repository:
 
 ```
 git clone https://github.com/NVIDIA/TRTorch
 ```
 
-Next, build the NVIDIA TRTorch container:
+Next, build the NVIDIA TRTorch container (from repo root):
 
 ```
-docker build -t trtorch -f Dockerfile.notebook .
+docker build -t trtorch -f notebooks/Dockerfile.notebook .
 ```
 
 Then launch the container with:
 
 ```
-docker run --runtime=nvidia -it --rm --ipc=host --net=host trtorch 
+docker run --runtime=nvidia -it --rm --ipc=host --net=host trtorch
 ```
 
 Within the docker interactive bash session, start Jupyter with
@@ -38,7 +38,7 @@ in, for example:
 ```http://[host machine]:8888/?token=aae96ae9387cd28151868fee318c3b3581a2d794f3b25c6b```
 
 
-Within the container, this notebook itself is located at `/workspace/TRTorch/notebooks`.
+Within the container, this notebooks itself is located at `/workspace/TRTorch/notebooks`.
 
 ## 2. Notebook list
 
86 changes: 43 additions & 43 deletions notebooks/WORKSPACE.notebook
@@ -25,29 +25,65 @@ http_archive(
 load("@rules_pkg//:deps.bzl", "rules_pkg_dependencies")
 rules_pkg_dependencies()
 
+git_repository(
+    name = "googletest",
+    remote = "https://github.com/google/googletest",
+    commit = "703bd9caab50b139428cea1aaff9974ebee5742e",
+    shallow_since = "1570114335 -0400"
+)
+
 # CUDA should be installed on the system locally
 new_local_repository(
     name = "cuda",
-    path = "/usr/local/cuda-10.2/targets/x86_64-linux/",
+    path = "/usr/local/cuda-10.2/",
     build_file = "@//third_party/cuda:BUILD",
 )
 
+new_local_repository(
+    name = "cublas",
+    path = "/usr",
+    build_file = "@//third_party/cublas:BUILD",
+)
+
+#############################################################################################################
+# Tarballs and fetched dependencies (default - use in cases when building from precompiled bin and tarballs)
+#############################################################################################################
+
 http_archive(
-    name = "libtorch_pre_cxx11_abi",
+    name = "libtorch",
     build_file = "@//third_party/libtorch:BUILD",
     strip_prefix = "libtorch",
-    sha256 = "ea8de17c5f70015583f3a7a43c7a5cdf91a1d4bd19a6a7bc11f074ef6cd69e27",
-    urls = ["https://download.pytorch.org/libtorch/cu102/libtorch-shared-with-deps-1.5.0.zip"],
+    urls = ["https://download.pytorch.org/libtorch/cu102/libtorch-cxx11-abi-shared-with-deps-1.5.1.zip"],
+    sha256 = "cf0691493d05062fe3239cf76773bae4c5124f4b039050dbdd291c652af3ab2a"
 )
 
 http_archive(
-    name = "libtorch",
+    name = "libtorch_pre_cxx11_abi",
     build_file = "@//third_party/libtorch:BUILD",
     strip_prefix = "libtorch",
-    urls = ["https://download.pytorch.org/libtorch/cu102/libtorch-cxx11-abi-shared-with-deps-1.5.0.zip"],
-    sha256 = "0efdd4e709ab11088fa75f0501c19b0e294404231442bab1d1fb953924feb6b5"
+    sha256 = "818977576572eadaf62c80434a25afe44dbaa32ebda3a0919e389dcbe74f8656",
+    urls = ["https://download.pytorch.org/libtorch/cu102/libtorch-shared-with-deps-1.5.1.zip"],
 )
 
+####################################################################################
+# Locally installed dependencies (use in cases of custom dependencies or aarch64)
+####################################################################################
+
+new_local_repository(
+    name = "cudnn",
+    path = "/usr/",
+    build_file = "@//third_party/cudnn/local:BUILD"
+)
+
+new_local_repository(
+    name = "tensorrt",
+    path = "/usr/",
+    build_file = "@//third_party/tensorrt/local:BUILD"
+)
+
+#########################################################################
+# Testing Dependencies (optional - comment out on aarch64)
+#########################################################################
 pip3_import(
     name = "trtorch_py_deps",
     requirements = "//py:requirements.txt"
@@ -64,39 +100,3 @@ pip3_import(
 load("@py_test_deps//:requirements.bzl", "pip_install")
 pip_install()
 
-## Downloaded distributions to use with --distdir
-#http_archive(
-#    name = "cudnn",
-#    urls = ["https://developer.nvidia.com/compute/machine-learning/cudnn/secure/7.6.5.32/Production/10.2_20191118/cudnn-10.2-linux-x64-v7.6.5.32.tgz"],
-#    build_file = "@//third_party/cudnn/archive:BUILD",
-#    sha256 = "600267f2caaed2fd58eb214ba669d8ea35f396a7d19b94822e6b36f9f7088c20",
-#    strip_prefix = "cuda"
-#)
-
-#http_archive(
-#    name = "tensorrt",
-#    urls = ["https://developer.nvidia.com/compute/machine-learning/tensorrt/secure/7.0/7.0.0.11/tars/TensorRT-7.0.0.11.Ubuntu-18.04.x86_64-gnu.cuda-10.2.cudnn7.6.tar.gz"],
-#    build_file = "@//third_party/tensorrt/archive:BUILD",
-#    sha256 = "c7d73b2585b18aae68b740249efa8c8ba5ae852abe9a023720595432a8eb4efd",
-#    strip_prefix = "TensorRT-7.0.0.11"
-#)
-
-# Locally installed dependencies
-new_local_repository(
-    name = "cudnn",
-    path = "/usr/",
-    build_file = "@//third_party/cudnn/local:BUILD"
-)
-
-new_local_repository(
-    name = "tensorrt",
-    path = "/usr/",
-    build_file = "@//third_party/tensorrt/local:BUILD"
-)
-
-git_repository(
-    name = "googletest",
-    remote = "https://github.com/google/googletest",
-    commit = "703bd9caab50b139428cea1aaff9974ebee5742e",
-    shallow_since = "1570114335 -0400"
-)
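Each `http_archive` in the WORKSPACE above pairs a download URL with a `sha256` digest; Bazel recomputes the digest after fetching and fails the build on a mismatch, which is why the hashes had to change together with the 1.5.0 → 1.5.1 libtorch URLs. A minimal sketch of that integrity check (the archive bytes and names here are made-up stand-ins, not the real tarballs):

```python
# Sketch of http_archive-style integrity checking: hash the fetched bytes and
# compare against the pinned digest recorded in the WORKSPACE file.
import hashlib

def sha256_hex(data: bytes) -> str:
    """Hex digest, the same form as the sha256 fields in http_archive."""
    return hashlib.sha256(data).hexdigest()

def verify(data: bytes, pinned: str) -> None:
    """Raise if the fetched bytes do not match the pinned digest."""
    got = sha256_hex(data)
    if got != pinned:
        raise ValueError(f"checksum mismatch: got {got}, want {pinned}")

# A stand-in "archive"; real pins come from hashing the actual tarball once.
archive = b"libtorch-1.5.1 tarball stand-in"
pin = sha256_hex(archive)  # what you would record in WORKSPACE
verify(archive, pin)       # passes; any byte change would raise
print("pinned:", pin)
```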
