merge_subgroup_data #1192

Merged (2 commits) on Apr 29, 2020
Conversation

@smartalecH (Collaborator) commented Apr 23, 2020

Closes #1178 (Using MPI across multiple sims)

Adds a merge_subgroup_data method that takes a NumPy array (int, float, complex, etc.) and returns a new array with an extra trailing dimension holding the concatenated input from each subgroup.

An example script test.py:

import meep as mp
import numpy as np

num_groups = 3
a = np.zeros((5, 7), dtype=np.complex128)

# split the MPI processes into num_groups subgroups;
# n is this process's subgroup index
n = mp.divide_parallel_processes(num_groups)
a[:] = (1 + n) + 1j * (1 + n)

# gather each subgroup's array into one array with an extra trailing dimension
b = mp.merge_subgroup_data(a, num_groups)
print(b.shape)
print(b)

Execute with 6 MPI processes (2 per subgroup):

mpirun -n 6 python3 test.py

This produces the expected output:

(5, 7, 3)
[[[1.+1.j 2.+2.j 3.+3.j]
  [1.+1.j 2.+2.j 3.+3.j]
  [1.+1.j 2.+2.j 3.+3.j]
  [1.+1.j 2.+2.j 3.+3.j]
  [1.+1.j 2.+2.j 3.+3.j]
  [1.+1.j 2.+2.j 3.+3.j]
  [1.+1.j 2.+2.j 3.+3.j]]

 [[1.+1.j 2.+2.j 3.+3.j]
  [1.+1.j 2.+2.j 3.+3.j]
  [1.+1.j 2.+2.j 3.+3.j]
  [1.+1.j 2.+2.j 3.+3.j]
  [1.+1.j 2.+2.j 3.+3.j]
  [1.+1.j 2.+2.j 3.+3.j]
  [1.+1.j 2.+2.j 3.+3.j]]

 [[1.+1.j 2.+2.j 3.+3.j]
  [1.+1.j 2.+2.j 3.+3.j]
  [1.+1.j 2.+2.j 3.+3.j]
  [1.+1.j 2.+2.j 3.+3.j]
  [1.+1.j 2.+2.j 3.+3.j]
  [1.+1.j 2.+2.j 3.+3.j]
  [1.+1.j 2.+2.j 3.+3.j]]

 [[1.+1.j 2.+2.j 3.+3.j]
  [1.+1.j 2.+2.j 3.+3.j]
  [1.+1.j 2.+2.j 3.+3.j]
  [1.+1.j 2.+2.j 3.+3.j]
  [1.+1.j 2.+2.j 3.+3.j]
  [1.+1.j 2.+2.j 3.+3.j]
  [1.+1.j 2.+2.j 3.+3.j]]

 [[1.+1.j 2.+2.j 3.+3.j]
  [1.+1.j 2.+2.j 3.+3.j]
  [1.+1.j 2.+2.j 3.+3.j]
  [1.+1.j 2.+2.j 3.+3.j]
  [1.+1.j 2.+2.j 3.+3.j]
  [1.+1.j 2.+2.j 3.+3.j]
  [1.+1.j 2.+2.j 3.+3.j]]]

@smartalecH (Collaborator, author) commented Apr 27, 2020

New test script:

import meep as mp
import numpy as np

num_groups = 3
a = np.zeros((5, 7), dtype=np.complex128)

# split the MPI processes into num_groups subgroups;
# n is this process's subgroup index
n = mp.divide_parallel_processes(num_groups)
a[:] = (1 + n) + 1j * (1 + n)

# the number of groups no longer needs to be passed explicitly
b = mp.merge_subgroup_data(a)
print(b.shape)
print(b)

Run the same way, this produces the same output as above, but more efficiently (no more overwriting of redundant data).

I also added helper functions that return the number of groups (get_num_groups()) and an array of group master ranks (get_group_masters()).
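A minimal usage sketch of those helpers (assuming, per the description above, that both take no arguments; run under mpirun as before):

import meep as mp

num_groups = 3
n = mp.divide_parallel_processes(num_groups)

# number of subgroups created by divide_parallel_processes
print(mp.get_num_groups())     # expected: 3

# global rank of each subgroup's master process
print(mp.get_group_masters())  # e.g. [0, 2, 4] with mpirun -n 6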

@stevengj merged commit 309a6cb into NanoComp:master on Apr 29, 2020

@stevengj (Collaborator) commented:

You can put a test in the documentation PR.

@zlin-opt commented Jul 18, 2020

@stevengj @smartalecH
What happens if num_groups does not divide the total number of processes evenly? Do the extra processors get assigned to the first group? For example, with num_groups=3 and 13 total processes, does the first group get 5 processes and the second and third 4 each?

@oskooi (Collaborator) commented Jul 18, 2020

Yes, those group sizes are correct: the process ranks are divided into (0-4), (5-8), and (9-12).

The relevant line in the function divide_parallel_processes, which assigns a group number to each process based on its rank, is:

int mygroup = (my_rank() * numgroups) / count_processors();
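As a quick check, the same integer arithmetic can be reproduced in plain Python (a standalone sketch, not Meep code) for the 13-process, 3-group example:

# mimic the C++ line: mygroup = (my_rank() * numgroups) / count_processors()
numgroups, nprocs = 3, 13
groups = [(rank * numgroups) // nprocs for rank in range(nprocs)]
print(groups)  # [0, 0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2]
# ranks 0-4 -> group 0, ranks 5-8 -> group 1, ranks 9-12 -> group 2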

bencbartlett pushed a commit to bencbartlett/meep referencing this pull request on Sep 9, 2021:

* add merge_subgroup_data
* fixed issues

Co-authored-by: Alec Hammond <[email protected]>