Cluster job that spawns its own processes for use with DDP #2408

Closed
jgbos opened this issue Jun 29, 2020 · 2 comments
Labels: feature (Is an improvement or enhancement), help wanted (Open to be worked on), won't fix (This will not be worked on)

Comments

jgbos commented Jun 29, 2020

🚀 Feature

Not sure if the title is appropriate. This feature would support the use case where

  • The manager sets MASTER_ADDR and MASTER_PORT
  • User knows how to set LOCAL_RANK, GLOBAL_RANK, and WORLD_SIZE
  • Each node has N_g GPUs
  • One job is spawned per GPU (in my case via MPI on SLURM), so with N_j nodes, world_size = N_j * N_g
  • Each job can see all N_g GPUs on its node, i.e., local_rank = global_rank % N_g and torch.cuda.set_device(local_rank) (see the sketch after this list)
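
As a concrete illustration of the rank arithmetic in the last bullet (a minimal sketch; the GLOBAL_RANK variable and the hard-coded GPU count are assumptions from my setup, not Lightning API):

import os
import torch

N_GPUS_PER_NODE = 2  # N_g on my cluster

global_rank = int(os.environ['GLOBAL_RANK'])   # 0 .. world_size - 1, set by the user
local_rank = global_rank % N_GPUS_PER_NODE     # which of the node's GPUs this job should use
torch.cuda.set_device(local_rank)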

Motivation

I'm able to write a class that overrides pl.Trainer to support this, but I thought (1) this might be a use case for others, and (2) I'd prefer to override your code as little as possible. Here is the sbatch file header:

#!/bin/bash 
#SBATCH --job-name job
#SBATCH -o jobs/%j.log 
#SBATCH -N 4 
#SBATCH --tasks-per-node=2 
#SBATCH --partition=gaia 
#SBATCH --gres=gpu:volta:2 

export MASTER_ADDR=$(hostname -s) 
export MASTER_PORT=$(python -c 'import socket; s=socket.socket(); s.bind(("", 0)); print(s.getsockname()[1]); s.close()') 

mpirun <options> <command>

Each job sees 2 GPUs (and the device IDs are not integers, which is another issue). To set up my run I set the following environment variables:

global_rank, world_size, hostname = get_dist_env()
os.environ['WORLD_SIZE'] = f'{world_size}'
os.environ['NODE_RANK'] = f'{global_rank}'
os.environ['LOCAL_RANK'] = f'{global_rank % 2}'

where get_dist_env knows how to get world_size and global_rank from the environment. For mpirun this is

world_size = int(os.getenv('OMPI_COMM_WORLD_SIZE'))
global_rank = int(os.getenv('OMPI_COMM_WORLD_RANK'))
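
Put together, a minimal sketch of what such a get_dist_env helper might look like under Open MPI (the socket.gethostname() call for the hostname is my assumption; only the two OMPI_COMM_WORLD_* variables come from the snippet above):

import os
import socket

def get_dist_env():
    # Open MPI exports these for every rank launched by mpirun
    world_size = int(os.environ['OMPI_COMM_WORLD_SIZE'])
    global_rank = int(os.environ['OMPI_COMM_WORLD_RANK'])
    hostname = socket.gethostname()
    return global_rank, world_size, hostname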

With those variables (which I think are standard in your code) I should be able to run in DDP mode. The issue is that, because each job sees both GPUs on its node, I cannot find a Trainer setting that makes this execute correctly: if I set num_gpus=1 the local_rank is not calculated correctly, and if I set num_gpus=2 your code tries to spawn an additional process.

Pitch

I'm not sure what the best API approach is, but if the user sets MASTER_ADDR, MASTER_PORT, WORLD_SIZE, GLOBAL_RANK, and LOCAL_RANK, then that should be everything you need to execute a distributed job.
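
For what it's worth, plain torch.distributed can already initialize from exactly those variables; something along these lines is all that should be needed (a sketch of the idea, not Lightning code; it assumes MASTER_ADDR and MASTER_PORT are already exported as in the sbatch header above):

import os
import torch
import torch.distributed as dist

world_size = int(os.environ['WORLD_SIZE'])
global_rank = int(os.environ['GLOBAL_RANK'])
local_rank = int(os.environ['LOCAL_RANK'])

torch.cuda.set_device(local_rank)
# MASTER_ADDR / MASTER_PORT are picked up from the environment by the default init_method
dist.init_process_group('nccl', rank=global_rank, world_size=world_size)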

Additional context

I'm clearly not an expert in distributed processing, so I'm not sure whether I'm asking for something that only works on my cluster with my settings and cannot be generalized. If that's the case, I am able to override Trainer to support my use case without you needing to change anything.

Thanks for a great package!

jgbos added the feature and help wanted labels on Jun 29, 2020
github-actions (bot) commented

Hi! Thanks for your contribution, great first issue!


stale bot commented Aug 28, 2020

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

stale bot added the won't fix label on Aug 28, 2020
stale bot closed this as completed on Sep 6, 2020