
Support setting CUDA_VISIBLE_DEVICES env variable #105

Closed · 1 commit

Conversation

@nzmsv commented May 19, 2022

When devices are masked out by setting CUDA_VISIBLE_DEVICES we cannot assume that a unique GPU index is available per rank.

While we obviously cannot share GPUs between ranks, there is another use case, exemplified by PyTorch XLA. It will set up CUDA_VISIBLE_DEVICES such that each rank thinks it has a single GPU with index 0, but no ranks share a GPU. See pytorch/xla#1203 (comment) for details.

This patch takes CUDA_VISIBLE_DEVICES into account by fetching the count of available devices from CUDA. PT-XLA's masking behavior can be emulated for nccl-tests by creating a wrapper that does export CUDA_VISIBLE_DEVICES=$OMPI_COMM_WORLD_RANK and launching it with OpenMPI's mpirun.
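For illustration, a minimal C sketch (not part of this patch) of what such a wrapper achieves in-process; the variable must be set before the first CUDA call, since the runtime reads CUDA_VISIBLE_DEVICES at initialization:

  // Sketch only: emulate per-rank masking before CUDA initializes.
  #include <stdio.h>
  #include <stdlib.h>
  #include <cuda_runtime.h>

  int main(void) {
    const char* rank = getenv("OMPI_COMM_WORLD_RANK");  // set by OpenMPI's mpirun
    if (rank) setenv("CUDA_VISIBLE_DEVICES", rank, 1);  // rank N sees only GPU N
    int count = 0;
    cudaGetDeviceCount(&count);  // now returns 1; the visible GPU is ordinal 0
    printf("rank %s sees %d device(s)\n", rank ? rank : "?", count);
    return 0;
  }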

@AddyLaddy (Collaborator)

Looks good to me. I have been using a similar patch myself. Any comments, @sjeaugey?

@sjeaugey (Member)

I think I've used another approach: a NCCL_TESTS_DEVICE env var that simply overrides which CUDA device to use. Using a wrapper script that sets NCCL_TESTS_DEVICE to OMPI_COMM_WORLD_RANK works just as well, but the GPUs can still see each other.
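A sketch of that override (the placement inside the tests' per-thread loop is assumed, not taken from an actual patch):

  // Hypothetical: NCCL_TESTS_DEVICE, if set, overrides the computed device.
  const char* dev = getenv("NCCL_TESTS_DEVICE");
  int gpuid = dev ? atoi(dev)
                  : args->localRank*args->nThreads*args->nGpus + args->thread*args->nGpus + i;
  CUDACHECK(cudaSetDevice(gpuid));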

@nzmsv (Author) commented May 27, 2022

My goal was to test the case where NCCL can't see all GPUs though. We had a regression in the AWS OFI plugin due to this, and masking off GPUs is a real use case (PT-XLA).

@sjeaugey (Member)

Ah, right, and in fact I've been using that mechanism to test the use of CUDA_VISIBLE_DEVICES as well, setting CUDA_VISIBLE_DEVICES=X and NCCL_TESTS_DEVICE=0.

But making the NCCL perf tests capable of running inside such an environment even for multiple GPUs per rank would be nice indeed.


  for (int i=0; i<args->nGpus; i++) {
    int gpuid = args->localRank*args->nThreads*args->nGpus + args->thread*args->nGpus + i;
-   CUDACHECK(cudaSetDevice(gpuid));
+   CUDACHECK(cudaSetDevice(gpuid % nGpusVisible));
    int rank = ((args->proc*args->nThreads + args->thread)*args->nGpus + i);
Member:

Why do we need those modifications to each test?

@rashikakheria commented Jul 1, 2022

These are required when CUDA_VISIBLE_DEVICES is set, right? From the documentation for cudaSetDevice:

Sets device as the current device for the calling host thread. Valid device id's are 0 to (cudaGetDeviceCount() - 1).

Since CUDA_VISIBLE_DEVICES=X makes cudaGetDeviceCount() return 1, this change is required in every collective to set the right GPU for kernel operations.
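A standalone sketch of that renumbering (assumed behavior per the CUDA documentation, not code from this PR):

  // Run with e.g. CUDA_VISIBLE_DEVICES=3: the one visible GPU becomes ordinal 0.
  #include <stdio.h>
  #include <cuda_runtime.h>

  int main(void) {
    int n = 0;
    cudaGetDeviceCount(&n);              // n == 1 under single-device masking
    cudaError_t err = cudaSetDevice(1);  // out of range -> "invalid device ordinal"
    printf("count=%d, cudaSetDevice(1): %s\n", n, cudaGetErrorString(err));
    return cudaSetDevice(0) == cudaSuccess ? 0 : 1;  // gpuid % n stays in range
  }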

Member:

Understood. That code is old; we should not re-compute the gpuid all over the code, but compute it once and store it in args. Also, using modulo is a bit of a hack. It works for your use cases, but it has other side effects.
For example, if you run with 2 ranks, set CUDA_VISIBLE_DEVICES to "0,1,2,3" on one rank and "4,5,6,7" on the other rank, and then launch with only 2 GPUs per rank, users would expect to use GPUs 0,1 and 4,5, but instead we'd use 0,1 and 6,7.
Also, today if you run on a 4-GPU system and launch 8 ranks, or one rank with -g 8, it errors out with "invalid device ordinal" or something similar. With this patch it would try to re-use each GPU twice (and generate an error later during NCCL init).
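Tracing the arithmetic of the first example (assuming nThreads=1, -g 2, both ranks on one node):

  // rank 0 (localRank 0): gpuid = 0*1*2 + 0*2 + i = {0,1}; % 4 -> {0,1} -> physical GPUs 0,1
  // rank 1 (localRank 1): gpuid = 1*1*2 + 0*2 + i = {2,3}; % 4 -> {2,3} -> physical GPUs 6,7, not 4,5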

I think the reason for using another environment variable (e.g. NCCL_TESTS_DEVICE) is to indicate whether the local ranks should work within the global CUDA_VISIBLE_DEVICES and ensure they use different GPUs within that set (the default), or consider the visible GPUs dedicated to them and start from GPU 0. We could compute the GPUs as NCCL_TESTS_DEVICE+i, and that would probably work as well.
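A sketch of that combined scheme (hypothetical, following the suggestion above):

  // Hypothetical: if NCCL_TESTS_DEVICE is set, the visible GPUs are dedicated to
  // this rank and we start from the given base; otherwise keep today's behavior.
  const char* base = getenv("NCCL_TESTS_DEVICE");
  int gpuid = base ? atoi(base) + i
                   : args->localRank*args->nThreads*args->nGpus + args->thread*args->nGpus + i;
  CUDACHECK(cudaSetDevice(gpuid));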
