Depletion restart with mpi #2778
Conversation
Can you add some comments to the code as to why this division by two is necessary only in the MPI case? Also, why is the length of this doubled when using MPI?
Hi @gridley, it's this block:

```python
if comm.size != 1:
    prev_results = self.prev_res
    self.prev_res = Results()
    mat_indexes = _distribute(range(len(self.burnable_mats)))
    for res_obj in prev_results:
        new_res = res_obj.distribute(self.local_mats, mat_indexes)
        self.prev_res.append(new_res)
```

that appends the `new_res` objects twice.
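A minimal, MPI-free sketch of why re-running that block inflates `prev_res`. The names here (`FakeResults`, `reload_and_distribute`, `ON_DISK`) are hypothetical stand-ins rather than the actual OpenMC classes, and the key assumption, that the freshly created results container starts from the results already on disk, is only one plausible reading of the behaviour described in this thread:

```python
# Hypothetical stand-in for openmc.deplete.Results: just a list of per-step results.
class FakeResults(list):
    pass

# Results already written to the depletion results file (assumption for illustration).
ON_DISK = ["step0", "step1", "step2"]

def reload_and_distribute(prev_res):
    """Mimic the block above: rebuild the container, then append 'distributed' copies."""
    new_container = FakeResults(ON_DISK)    # assumed: a fresh container re-reads the file
    for res_obj in prev_res:
        new_container.append(res_obj)       # stands in for res_obj.distribute(...)
    return new_container

prev_res = FakeResults(ON_DISK)             # loaded once when the operator is constructed
prev_res = reload_and_distribute(prev_res)  # called again on restart via _get_start_data()
print(len(prev_res))                        # 6 -- every previous step now appears twice
```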
Is there an implied bug here where we double the size of the depletion results if we're doing MPI? Regarding the logic you pointed out:

```python
for res_obj in prev_results:
    new_res = res_obj.distribute(self.local_mats, mat_indexes)
    self.prev_res.append(new_res)
```

Is that intentional, a bug, or is it doing something we're not quite aware of?
@drewejohnson @gridley, I think I've figured out what's happening here:
Just a style fix and this should be good to go. Sorry for the delay on this!
Co-authored-by: Paul Romano <[email protected]>
@drewejohnson Any further comments or do you approve of the change here?
Thanks for your patience!
Co-authored-by: Paul Romano <[email protected]>
Description
This PR fixes an inconsistency when restarting a depletion simulation with MPI enabled. The problem is due to a call to the `openmc/deplete/coupled_operator._load_previous_results()` method from `openmc/deplete/abc._get_start_data()`, which doubles the size of the depletion results file and starts appending new results from there.

To reproduce, simply run:
mpirun -n 2 --bind-to numa --map-by numa python -m mpi4py run_depletion.py
and
mpirun -n 2 --bind-to numa --map-by numa python -m mpi4py restart_depletion.py
from the pincell_depletion example.
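For context, a rough sketch of what the restart script might look like, based on the pincell_depletion example. The chain file name, time steps, power level, and the use of `openmc.Model.from_xml()` are placeholders and assumptions, and the exact API may differ between OpenMC versions:

```python
# restart_depletion.py (sketch, placeholder values throughout)
import openmc
import openmc.deplete

# Load the model; the pincell_depletion example builds it in Python instead.
model = openmc.Model.from_xml()

# Load the results of the first run so the restart continues from the last step.
prev_results = openmc.deplete.Results("depletion_results.h5")

op = openmc.deplete.CoupledOperator(
    model, chain_file="chain_simple.xml", prev_results=prev_results)

integrator = openmc.deplete.PredictorIntegrator(
    op, timesteps=[30.0] * 3, power=174.0, timestep_units='d')
integrator.integrate()
```

Run it under MPI as shown above, e.g. `mpirun -n 2 python -m mpi4py restart_depletion.py`.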
Checklist