For any kind of proper scalability, we rely on MPI. As it stands, we have separate code on top of our mesh classes that operates with MPI. We use Zoltan2 for node balancing and Tpetra for MPI communications.
We should consolidate this into a single layer on top of a mesh class. Out-of-the-box distributed mesh scaling would be significantly valuable for ELEMENTS users.
The distributed mesh would handle the following (see the sketch after this list):
Node balancing
Process mapping during read/write
MPI communications
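A minimal sketch of what this single layer might look like, assuming a hypothetical `DistributedMesh` class wrapping an existing mesh; all names here (`DistributedMesh`, `mesh_t`, `repartition`, `update_ghosts`) are placeholders, not existing ELEMENTS APIs:

```cpp
// Hypothetical sketch of one distributed-mesh layer on top of an existing
// mesh class.  All names are placeholders, not existing ELEMENTS APIs.
#include <mpi.h>
#include <cstddef>
#include <string>
#include <vector>

struct mesh_t;  // stand-in for an existing ELEMENTS mesh class

class DistributedMesh {
public:
    DistributedMesh(mesh_t& local_mesh, MPI_Comm comm)
        : mesh_(local_mesh), comm_(comm) {}

    // Node balancing: rebuild the owned/ghost partition
    // (currently done via Zoltan2) behind one call.
    void repartition();

    // Process mapping during read/write: distribute the mesh across
    // ranks on read, collect or write per-rank pieces on write.
    void read(const std::string& path);
    void write(const std::string& path) const;

    // MPI communications: refresh ghost-node values for a field stored
    // in the mesh's native layout (no transpose into MultiVector layout).
    void update_ghosts(double* field, std::size_t num_components);

private:
    mesh_t&  mesh_;
    MPI_Comm comm_;
    std::vector<int>         neighbor_ranks_;  // ranks that share ghost nodes
    std::vector<std::size_t> send_node_ids_;   // local ids packed per neighbor
    std::vector<std::size_t> recv_node_ids_;   // ghost ids unpacked per neighbor
};
```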
During this process, we should consider moving away from Tpetra MultiVectors for MPI comms and implementing our own, for two reasons:
The layout of MultiVectors is contrary to our own data layout for several arrays, which results in substantial over-communication when we have to transpose them. This is only a mild slowdown now, but we anticipate that future methods will take a much bigger hit (see the exchange sketch after this list).
We would like to take advantage of GPU-direct MPI communication in the future, and Tpetra is unlikely to support this.
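As a rough illustration of the kind of exchange we would own, here is a sketch of a ghost-node update that keeps the field in its native node-major layout and uses plain MPI. The `CommPlan` structure and all names are illustrative assumptions, not existing ELEMENTS code; with a GPU-aware MPI the same calls could in principle be handed device buffers directly.

```cpp
// Hypothetical ghost-node exchange in the mesh's native node-major layout
// (all components of a node are contiguous), bypassing Tpetra MultiVectors.
#include <mpi.h>
#include <cstddef>
#include <vector>

struct CommPlan {
    std::vector<int>                      neighbors;  // neighbor ranks
    std::vector<std::vector<std::size_t>> send_ids;   // owned node ids per neighbor
    std::vector<std::vector<std::size_t>> recv_ids;   // ghost node ids per neighbor
};

// field[node * num_comp + c]: the wire format matches the storage format,
// so no transpose into a component-major MultiVector is needed.
void update_ghosts(double* field, std::size_t num_comp,
                   const CommPlan& plan, MPI_Comm comm)
{
    const std::size_t n = plan.neighbors.size();
    std::vector<std::vector<double>> send_buf(n), recv_buf(n);
    std::vector<MPI_Request> reqs;
    reqs.reserve(2 * n);

    for (std::size_t i = 0; i < n; ++i) {
        // Pack owned-node values headed to neighbor i (contiguous per node).
        send_buf[i].resize(plan.send_ids[i].size() * num_comp);
        std::size_t k = 0;
        for (std::size_t node : plan.send_ids[i])
            for (std::size_t c = 0; c < num_comp; ++c)
                send_buf[i][k++] = field[node * num_comp + c];

        recv_buf[i].resize(plan.recv_ids[i].size() * num_comp);

        MPI_Request r;
        MPI_Irecv(recv_buf[i].data(), (int)recv_buf[i].size(), MPI_DOUBLE,
                  plan.neighbors[i], 0, comm, &r);
        reqs.push_back(r);
        MPI_Isend(send_buf[i].data(), (int)send_buf[i].size(), MPI_DOUBLE,
                  plan.neighbors[i], 0, comm, &r);
        reqs.push_back(r);
    }

    MPI_Waitall((int)reqs.size(), reqs.data(), MPI_STATUSES_IGNORE);

    // Unpack received values into the ghost slots of the same field array.
    for (std::size_t i = 0; i < n; ++i) {
        std::size_t k = 0;
        for (std::size_t node : plan.recv_ids[i])
            for (std::size_t c = 0; c < num_comp; ++c)
                field[node * num_comp + c] = recv_buf[i][k++];
    }
}
```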