fix: Segfault fix for Benchmarks #2432

Merged 2 commits on Nov 3, 2023
12 changes: 6 additions & 6 deletions .circleci/config.yml
@@ -109,7 +109,7 @@ commands:
sudo docker run --rm --runtime=nvidia --gpus all nvidia/cuda:11.6.2-base-ubuntu20.04 nvidia-smi

install-cudnn:
description: "Install CUDNN 8.8.1"
description: "Install CUDNN 8.9.5"
parameters:
os:
type: string
Expand All @@ -119,7 +119,7 @@ commands:
default: "x86_64"
cudnn-version:
type: string
default: "8.8.1.3"
default: "8.9.5.30"
cuda-version:
type: string
default: "cuda12.0"
@@ -198,7 +198,7 @@ commands:
default: "cuda12.0"
cudnn-version:
type: string
default: "8.8.1.3"
default: "8.9.5.30"
trt-version-short:
type: string
default: "8.6.1"
@@ -246,7 +246,7 @@ commands:
default: "8.6.1"
cudnn-version-long:
type: string
default: "8.8.1.3"
default: "8.9.5.30"
steps:
- run:
name: Set up python environment
@@ -1460,7 +1460,7 @@ parameters:
default: "https://download.pytorch.org/whl/nightly/cu121"
cudnn-version:
type: string
default: "8.8.1.3"
default: "8.9.5.30"
trt-version-short:
type: string
default: "8.6.1"
@@ -1483,7 +1483,7 @@ parameters:
default: "https://download.pytorch.org/whl/cu117"
cudnn-version-legacy:
type: string
default: "8.8.1.3"
default: "8.9.5.30"
trt-version-short-legacy:
type: string
default: "8.6.1"
2 changes: 1 addition & 1 deletion README.md
@@ -118,7 +118,7 @@ These are the following dependencies used to verify the testcases. Torch-TensorR
- Bazel 5.2.0
- Libtorch 2.2.0.dev (latest nightly) (built with CUDA 12.1)
- CUDA 12.1
- - cuDNN 8.8.1
+ - cuDNN 8.9.5
- TensorRT 8.6.1

## Prebuilt Binaries and Wheel files
6 changes: 3 additions & 3 deletions WORKSPACE
@@ -71,10 +71,10 @@ http_archive(
http_archive(
name = "cudnn",
build_file = "@//third_party/cudnn/archive:BUILD",
sha256 = "79d77a769c7e7175abc7b5c2ed5c494148c0618a864138722c887f95c623777c",
strip_prefix = "cudnn-linux-x86_64-8.8.1.3_cuda12-archive",
sha256 = "2a2eb89a2ab51071151c6082f1e816c702167a711a9372f9f73a7b5c4b06e01a",
strip_prefix = "cudnn-linux-x86_64-8.9.5.30_cuda12-archive",
urls = [
"https://developer.nvidia.com/downloads/compute/cudnn/secure/8.8.1/local_installers/12.0/cudnn-linux-x86_64-8.8.1.3_cuda12-archive.tar.xz",
"https://developer.nvidia.com/downloads/compute/cudnn/secure/8.9.5/local_installers/12.x/cudnn-linux-x86_64-8.9.5.30_cuda12-archive.tar.xz",
],
)

2 changes: 1 addition & 1 deletion dev_dep_versions.yml
@@ -1,4 +1,4 @@
__version__: "2.2.0.dev0"
__cuda_version__: "12.1"
__cudnn_version__: "8.8"
__cudnn_version__: "8.9"
__tensorrt_version__: "8.6"
4 changes: 2 additions & 2 deletions docker/README.md
@@ -17,14 +17,14 @@ Note: By default the container uses the `pre-cxx11-abi` version of Torch + Torch

### Instructions

- - The example below uses CUDNN 8.8 and TensorRT 8.6
+ - The example below uses CUDNN 8.9 and TensorRT 8.6
- See <a href="https://github.com/pytorch/TensorRT#dependencies">dependencies</a> for a list of current default dependencies.

> From root of Torch-TensorRT repo

Build:
```
- DOCKER_BUILDKIT=1 docker build --build-arg TENSORRT_VERSION=8.6 --build-arg CUDNN_VERSION=8.8 -f docker/Dockerfile -t torch_tensorrt:latest .
+ DOCKER_BUILDKIT=1 docker build --build-arg TENSORRT_VERSION=8.6 --build-arg CUDNN_VERSION=8.9 -f docker/Dockerfile -t torch_tensorrt:latest .
```

Run:
7 changes: 2 additions & 5 deletions tools/perf/perf_run.py
@@ -7,14 +7,14 @@
import time
import timeit
import warnings
+ from functools import wraps

import numpy as np
import pandas as pd
import tensorrt as trt

# Importing supported Backends
import torch
- import torch.backends.cudnn as cudnn
from utils import (
BENCHMARK_MODELS,
parse_backends,
@@ -30,6 +30,7 @@


def run_with_try_except(func):
+ @wraps(func)
def wrapper_func(*args, **kwargs):
try:
return func(*args, **kwargs)
@@ -527,7 +528,6 @@ def recordStats(backend, timings, precision, batch_size=1, compile_time_s=None):
)
args = arg_parser.parse_args()

- cudnn.benchmark = True

Collaborator Author:
@narendasan - this line causes a segfault at inference time when we compile the Docker container with CUDNN 8.8 while Torch 2.1.0 uses the CUDNN 8.9 Python distributions. With the line removed, inference works as expected.

Do you think it would be necessary/important to upgrade the build stack to CUDNN 8.9 for the upcoming release?
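
For reference, a minimal sketch (not part of this PR; it uses only the standard `torch.backends.cudnn` API) of how to inspect the cuDNN build that PyTorch actually loads before opting back into the autotuner. A mismatch between that version and the container's system cuDNN is the scenario described above:

```python
import torch
import torch.backends.cudnn as cudnn

if torch.cuda.is_available() and cudnn.is_available():
    # cudnn.version() reports the cuDNN release PyTorch has loaded,
    # e.g. 8902 for cuDNN 8.9.2. Comparing it against the cuDNN the
    # container was built with helps spot the mismatch before enabling
    # the autotuner (the flag this PR removes from perf_run.py).
    print("cuDNN loaded by PyTorch:", cudnn.version())
    cudnn.benchmark = True
```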

# Create random input tensor of certain size
torch.manual_seed(12345)
model_name = "Model"
@@ -542,9 +542,6 @@ def recordStats(backend, timings, precision, batch_size=1, compile_time_s=None):
if os.path.exists(model_name):
print("Loading user provided torchscript model: ", model_name)
model = torch.jit.load(model_name).cuda().eval()
- elif model_name in BENCHMARK_MODELS:
-     print("Loading torchscript model from BENCHMARK_MODELS for: ", model_name)
-     model = BENCHMARK_MODELS[model_name]["model"].eval().cuda()

# Load PyTorch Model, if provided
if len(model_name_torch) > 0 and os.path.exists(model_name_torch):
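Since the diff above collapses most of `run_with_try_except`, here is a short, hypothetical sketch of why `@wraps(func)` is worth adding to it: without `functools.wraps`, every benchmark entry point wrapped by the decorator would report the generic name `wrapper_func` in logs and tracebacks. The except body and the `run_dummy_benchmark` function below are placeholders for illustration, not the repository's actual code.

```python
from functools import wraps


def run_with_try_except(func):
    @wraps(func)  # copies func.__name__ / __doc__ onto the wrapper
    def wrapper_func(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception as exc:
            # Placeholder handling: the real except body is collapsed in the diff.
            print(f"{func.__name__} failed: {exc}")

    return wrapper_func


@run_with_try_except
def run_dummy_benchmark():
    raise RuntimeError("boom")


# Thanks to @wraps this prints "run_dummy_benchmark", not "wrapper_func".
print(run_dummy_benchmark.__name__)
run_dummy_benchmark()  # prints "run_dummy_benchmark failed: boom"
```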
4 changes: 0 additions & 4 deletions tools/perf/utils.py
@@ -1,12 +1,8 @@
from typing import Optional, Sequence, Union

import custom_models as cm
import timm
import torch
import torchvision.models as models

import torch_tensorrt

BENCHMARK_MODEL_NAMES = {
"vgg16",
"alexnet",