Expected behavior
For benchmarking GPU nodes, one would want to scale across 1, 2, … GPUs of a single node and then across multiple nodes. Some simulations may reach a higher cumulative performance when they run on single GPUs, or when they are parallelized over only some of the GPUs of a node. Ideal scaling should then be a straight line through the origin and the single-GPU data point.
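To make the request concrete, here is a minimal sketch of the kind of intra-node scan meant above (plain srun, not MDBenchmark syntax): 1, 2 and 4 concurrent simulations, one rank and one GPU each. The sim1..simN directories, the GPU count and the SLURM options are assumptions for illustration only.

    # Hypothetical intra-node scan: 1, 2 and 4 independent simulations,
    # one MPI rank and one GPU per simulation, before scaling out to more nodes.
    # Each simN directory is assumed to hold its own topol.tpr.
    for nsim in 1 2 4; do
        srun --ntasks="$nsim" --gpus-per-task=1 \
            gmx_mpi mdrun -multidir $(seq -s ' ' -f 'sim%g' 1 "$nsim") \
            -nb gpu -bonded gpu -maxh 0.25 -resethway -noconfout -deffnm topol
    done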
Actual behavior
MDBenchmark only scales across different numbers of nodes, so the results can be misleading, as explained above.
Your version of MDBenchmark:
(run mdbenchmark --version on your machine)
The left axis shows the individual (per system, blue circles) and the cumulative performance (green crosses), while the right axis shows the performance relative to the 1 GPU & 1 system case. Notes:
GROMACS requires the number of ranks to be a multiple of the number of simulations; here, they were set equal.
In all cases, the maximum allowed number of threads gave the best performance. Still, the optimal combination should be tested.
All possible interactions of the Martini test system (no PME!) were offloaded to the GPU using srun gmx_mpi mdrun -nb gpu -bonded gpu -v -maxh 0.25 -resethway -noconfout -deffnm topol
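For reference, a hypothetical batch script putting these notes together for a 4-GPU / 4-simulation case. Only the mdrun flags are taken from the command above; the SLURM options, module name, GPU request and thread count are placeholders that would need to be adapted to the machine.

    #!/bin/bash
    #SBATCH --nodes=1
    #SBATCH --ntasks=4            # one MPI rank per simulation (ranks = simulations)
    #SBATCH --cpus-per-task=10    # placeholder: maximum threads the node allows per rank
    #SBATCH --gres=gpu:4          # placeholder request for a 4-GPU node

    module load gromacs           # placeholder module name

    export OMP_NUM_THREADS="$SLURM_CPUS_PER_TASK"

    # Four independent Martini systems, one per GPU; sim1..sim4 each contain topol.tpr.
    srun gmx_mpi mdrun -multidir sim1 sim2 sim3 sim4 \
        -nb gpu -bonded gpu -v -maxh 0.25 -resethway -noconfout -deffnm topol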