random generators compatibility with parallel processing #513
Conversation
The first step is the implementation of RNGs with an external state variable. Three are implemented (note the …).
Three equivalent generators use an internal state variable, for compatibility: …
The state variable is a private type, and there are also two functions to initialise it (set a new seed): …
Note that the "seed" is now a set of two 64-bit integers. The idea is that … up to now, except that the RNG is no longer reinitialised at the beginning of …
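The pattern described above (a generator whose state lives in a caller-owned variable, seeded from a pair of 64-bit integers) can be sketched in Python with numpy's `PCG64` standing in for the C generators. This is only an illustration of the concept, not AT's actual implementation; `init_rng` is a hypothetical name:

```python
# Illustrative sketch only: numpy's PCG64 plays the role of the
# external-state RNG; AT's actual generators are implemented in C.
from numpy.random import Generator, PCG64, SeedSequence

def init_rng(seed_hi, seed_lo):
    """Initialise a generator from a 'seed' made of two 64-bit integers."""
    return Generator(PCG64(SeedSequence((seed_hi, seed_lo))))

# Two generators initialised with the same seed pair follow the same stream
a = init_rng(2**63 + 1, 42)
b = init_rng(2**63 + 1, 42)
assert [a.random() for _ in range(5)] == [b.random() for _ in range(5)]

# Re-seeding resets the stream deterministically
c = init_rng(2**63 + 1, 42)
assert c.random() == init_rng(2**63 + 1, 42).random()
```

Keeping the state outside the generator is what makes it possible to hold several independent streams (one per thread, plus a common one) side by side.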
Now how to use that:
There may be many other solutions, please feel free to propose!
Hello @lfarv, thanks for looking into this; what you propose seems good to me. I would personally go for the simpler option 1. I do not see the practical advantage of handling everything in the integrator: is there a good reason to push for that, or is it just to keep the present strategy? The identification of the threads in python multiprocessing has to be done on the python side and passed to …
Hi @swhite2401: I also have a preference for option 1: it makes integrators simpler and more easily allows resetting the seeds, though it's not obvious (…). Following your proposal, I'll start experimenting with option 1 in this branch, and try to set up "test integrators" to check the behaviour.
Sounds good. How would you check the sync condition? Would you count the number of calls? The check would be useful only for python multiprocessing; in the case of MPI we can check this during the tracking, since processes can communicate with each other. Race conditions are a tricky problem... however, a simple solution would be not to implement OpenMP in the variable multipole element; this is just one element after all, so the performance loss may not be significant.
No, after tracking is finished I would ask for another random value and check that they are equal in all threads! In the meantime, I found an unexpected behaviour of … This is far from optimal, and it makes it very difficult to have all processes follow a single stream of random numbers for the variable multipole. So I'll open another PR to solve this first.
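The sync check described above (drawing one extra number from the common stream after tracking and comparing it across workers) could look like the following sketch. Here numpy generators stand in for AT's common stream, and `track_worker` is a hypothetical helper:

```python
# Hypothetical sketch of the "one extra draw" sync check.
from numpy.random import default_rng

N_WORKERS = 4
SEED = 12345  # shared seed for the "common" stream

def track_worker(n_draws):
    """Each worker holds its own copy of the common-stream generator and
    must consume exactly the same number of values during tracking."""
    rng = default_rng(SEED)
    for _ in range(n_draws):
        rng.random()        # values consumed during tracking
    return rng.random()     # one extra draw: the sync check value

# If every worker made the same number of calls, the check values agree
checks = [track_worker(1000) for _ in range(N_WORKERS)]
assert len(set(checks)) == 1, "common streams out of sync"

# A worker that consumed a different number of values is detected
assert track_worker(999) != checks[0]
```

The check catches any divergence in the number of calls, which is exactly the race-condition symptom discussed above.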
I also found another point to be clarified in … So we also have to work on the python part of AT. I don't know if there are other uses of random numbers besides `at.tracking.particles.beam()`…
Here is a test showing the behaviour of the 2 python RNGs for common or thread-specific random number streams. The new …

```python
from at import random, DConstant
from mpi4py import MPI

def run():
    comm = MPI.COMM_WORLD
    size = comm.Get_size()
    rank = comm.Get_rank()
    ismpi = DConstant.mpi
    print("MPI: {}, rank: {}/{}, common:{}, thread:{}".format(ismpi, rank, size,
          random.common.random(),
          random.thread.random()))

if __name__ == '__main__':
    run()
```

The result is:

```
(mpi39) bijou:MPI $ mpiexec -n 4 python test_atrand.py
MPI: True, rank: 2/4, common:0.8699988509120198, thread:0.3919583896455968
MPI: True, rank: 3/4, common:0.8699988509120198, thread:0.8189201849105845
MPI: True, rank: 0/4, common:0.8699988509120198, thread:0.37707540876459267
MPI: True, rank: 1/4, common:0.8699988509120198, thread:0.141339263238754
```

The …
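The common/thread split visible in the MPI output can be reproduced without MPI. One standard way to derive independent per-rank streams from a single seed is numpy's `SeedSequence.spawn`; this is an illustration of the idea, not necessarily how `at.random` does it internally:

```python
# Sketch: one shared seed, one "common" stream plus one stream per rank.
from numpy.random import SeedSequence, default_rng

SEED = 20220513   # arbitrary illustrative seed
NRANKS = 4

# "common" stream: every rank seeds from the same SeedSequence,
# so each rank draws the same value (like the 'common:' column above)
common = [default_rng(SeedSequence(SEED)).random() for _ in range(NRANKS)]
assert len(set(common)) == 1

# "thread" stream: each rank seeds from a spawned child sequence,
# giving statistically independent streams (like the 'thread:' column)
thread = [default_rng(ss).random() for ss in SeedSequence(SEED).spawn(NRANKS)]
assert len(set(thread)) == NRANKS
```

Spawning children from one parent seed keeps the whole setup reproducible from a single number while still giving each rank its own stream.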
Similarly, 2 random generators are available to C integrators: the …
As examples, quantum diffusion elements should use the … These generators are available with MPI, python multiprocessing and OpenMP.
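The reason the two element types need different streams can be shown with a small numpy sketch (illustrative only, with made-up element behaviour): quantum diffusion must apply independent noise in each worker, while a variable multipole must present the same field value to every worker on a given turn.

```python
# Sketch of why the two streams serve different element types.
import numpy as np
from numpy.random import SeedSequence, default_rng

SEED = 7
workers = [default_rng(child) for child in SeedSequence(SEED).spawn(2)]
common = [default_rng(SeedSequence(SEED)) for _ in range(2)]

# Quantum diffusion: each worker must apply *independent* kicks,
# otherwise the particles it tracks are artificially correlated
kicks = [rng.normal(size=3) for rng in workers]
assert not np.allclose(kicks[0], kicks[1])

# Variable multipole: every worker must see the *same* field value
# on a given turn, so the shared (common) stream is required
fields = [rng.random() for rng in common]
assert fields[0] == fields[1]
```

Picking the wrong stream is silent but wrong: a common stream in quantum diffusion correlates particles, and per-thread streams in the variable multipole make the machine physically different in each worker.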
This PR is ready for merging. It does not change any integrator nor the …
Hello @lfarv, seems fine, I think we can start disseminating this in the passmethods! |
The simple random number generators (RNG) used up to now are not compatible with parallel processing: the streams of numbers in the different threads are likely to be identical. We'll try here to solve this problem.
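The failure mode is easy to reproduce: if every worker initialises its own generator from the same seed (for instance a state inherited from the parent process on fork), all workers produce the same "random" sequence. A minimal numpy demonstration of the problem:

```python
# Naive parallel setup: each of 4 workers builds its RNG from the
# same seed, so the supposedly independent streams are identical.
from numpy.random import default_rng

SEED = 1234
streams = [default_rng(SEED).random(5).tolist() for _ in range(4)]
assert all(s == streams[0] for s in streams)   # every "thread" identical!
```

Any correct fix has to break this symmetry deliberately, e.g. by deriving a distinct child seed per worker, which is what the common/thread generator split in this PR provides.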