Commit
Nightly: do test install with the dependencies better and skip CUDA tests on cpu only box
dzhulgakov committed Jan 9, 2023
1 parent e2e4542 commit b7cc1d7
Showing 3 changed files with 25 additions and 25 deletions.
4 changes: 4 additions & 0 deletions README.md
@@ -10,3 +10,7 @@ Folders:
 - **windows** : scripts to build Windows wheels
 - **cron** : scripts to drive all of the above scripts across multiple configurations together
 - **analytics** : scripts to pull wheel download count from our AWS s3 logs
+
+## Testing
+
+In order to test builds triggered by the PyTorch repo's GitHub actions, see [these instructions](https://github.com/pytorch/pytorch/blob/master/.github/scripts/README.md#testing-pytorchbuilder-changes).
7 changes: 6 additions & 1 deletion conda/build_pytorch.sh
@@ -388,7 +388,12 @@ for py_ver in "${DESIRED_PYTHON[@]}"; do
 
     # Install the built package and run tests, unless it's for mac cross compiled arm64
     if [[ -z "$CROSS_COMPILE_ARM64" ]]; then
-        conda install -y "$built_package"
+        # Install the package as if from local repo instead of tar.bz2 directly in order
+        # to trigger runtime dependency installation. See https://github.com/conda/conda/issues/1884
+        # Notes:
+        #   - pytorch-nightly is included to install torchtriton
+        #   - nvidia is included for cuda builds, there's no harm in listing the channel for cpu builds
+        conda install -y -c "file://$PWD/$output_folder" pytorch==$PYTORCH_BUILD_VERSION -c pytorch -c numba/label/dev -c pytorch-nightly -c nvidia
 
         echo "$(date) :: Running tests"
         pushd "$pytorch_rootdir"
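The local-channel install above can be illustrated as a standalone dry run. This is a hedged sketch, not the build script itself: `output_folder` and `PYTORCH_BUILD_VERSION` are placeholder values standing in for the real build variables, and the command is echoed rather than executed.

```shell
# Hypothetical stand-ins for the build script's variables
output_folder="conda-bld/linux-64"
PYTORCH_BUILD_VERSION="2.0.0.dev20230109"

# Point conda at the freshly built package as a local channel ("file://...")
# so that runtime dependency resolution runs, unlike installing the tar.bz2 directly.
local_channel="file://$PWD/$output_folder"

# Dry run: print the command that would be executed
echo conda install -y -c "$local_channel" "pytorch==$PYTORCH_BUILD_VERSION" \
  -c pytorch -c numba/label/dev -c pytorch-nightly -c nvidia
```

Listing the local channel first means the freshly built package shadows any same-named package on the remote channels during resolution.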
39 changes: 15 additions & 24 deletions run_tests.sh
@@ -72,21 +72,6 @@ fi
 
 # Environment initialization
 if [[ "$package_type" == conda || "$(uname)" == Darwin ]]; then
-  # Why are there two different ways to install dependencies after installing an offline package?
-  # The "cpu" conda package for pytorch doesn't actually depend on "cpuonly" which means that
-  # when we attempt to update dependencies using "conda update --all" it will attempt to install
-  # whatever "cudatoolkit" your current computer relies on (which is sometimes none). When conda
-  # tries to install this cudatoolkit that correlates with your current hardware it will also
-  # overwrite the currently installed "local" pytorch package meaning you aren't actually testing
-  # the right package.
-  # TODO (maybe): Make the "cpu" package of pytorch depend on "cpuonly"
-  if [[ "$cuda_ver" = 'cpu' ]]; then
-    # Installing cpuonly will also install dependencies as well
-    retry conda install -y -c pytorch cpuonly
-  else
-    # Install dependencies from installing the pytorch conda package offline
-    retry conda update -yq --all -c defaults -c pytorch -c numba/label/dev
-  fi
   # Install the testing dependencies
   retry conda install -yq future hypothesis ${NUMPY_PACKAGE} ${PROTOBUF_PACKAGE} pytest setuptools six typing_extensions pyyaml
 else
@@ -140,15 +125,21 @@ python -c "import torch; exit(0 if torch.__version__ == '$expected_version' else
 
 # Test that CUDA builds are setup correctly
 if [[ "$cuda_ver" != 'cpu' ]]; then
-  # Test CUDA archs
-  echo "Checking that CUDA archs are setup correctly"
-  timeout 20 python -c 'import torch; torch.randn([3,5]).cuda()'
-
-  # These have to run after CUDA is initialized
-  echo "Checking that magma is available"
-  python -c 'import torch; torch.rand(1).cuda(); exit(0 if torch.cuda.has_magma else 1)'
-  echo "Checking that CuDNN is available"
-  python -c 'import torch; exit(0 if torch.backends.cudnn.is_available() else 1)'
+  cuda_installed=1
+  nvidia-smi || cuda_installed=0
+  if [[ "$cuda_installed" == 0 ]]; then
+    echo "Skip CUDA tests for machines without a Nvidia GPU card"
+  else
+    # Test CUDA archs
+    echo "Checking that CUDA archs are setup correctly"
+    timeout 20 python -c 'import torch; torch.randn([3,5]).cuda()'
+
+    # These have to run after CUDA is initialized
+    echo "Checking that magma is available"
+    python -c 'import torch; torch.rand(1).cuda(); exit(0 if torch.cuda.has_magma else 1)'
+    echo "Checking that CuDNN is available"
+    python -c 'import torch; exit(0 if torch.backends.cudnn.is_available() else 1)'
+  fi
 fi
 
 # Check that OpenBlas is not linked to on Macs
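The skip logic in run_tests.sh above follows a common probe-and-skip pattern: attempt a cheap hardware probe and gate the expensive checks on its exit status. A minimal standalone sketch, with a placeholder `echo` standing in for the real torch CUDA checks:

```shell
# Probe for a working nvidia-smi; any failure (binary missing on a CPU-only
# box, or present but unable to talk to a GPU) flips the flag.
cuda_installed=1
nvidia-smi >/dev/null 2>&1 || cuda_installed=0

if [ "$cuda_installed" = 0 ]; then
  echo "Skip CUDA tests for machines without a Nvidia GPU card"
else
  echo "Running CUDA tests"  # placeholder for the torch CUDA checks
fi
```

Because the probe uses `|| cuda_installed=0` rather than letting `nvidia-smi` fail outright, the pattern stays safe even under `set -e`.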
