Releases · ospray/module_mpi
OSPRay MPI v2.4.0
- Significant improvements have been made to loading performance in the MPI Offload device. Applications which make large numbers of API calls or create many smaller geometries or volumes should see substantial load time improvements.
- Strided data arrays are compacted on the app rank before sending (see the sketch after this list).
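A minimal sketch (C, using the OSPRay 2.x API) of the kind of strided shared array this compaction targets: a non-owning view into an interleaved application buffer, created with `ospNewSharedData1DStride` from `ospray_util.h`. The vertex layout and geometry parameter here are illustrative assumptions, not taken from the release notes.

```c
#include <ospray/ospray.h>
#include <ospray/ospray_util.h>

/* Hypothetical interleaved vertex layout in the application's memory. */
typedef struct
{
  float position[3];
  float color[4];
} Vertex;

void upload_positions(OSPGeometry mesh, const Vertex *vertices, uint64_t count)
{
  /* Non-owning view of just the position member, stepping by the full
     struct size. In the offload device this strided view is compacted on
     the application rank before being sent to the workers. */
  OSPData positions =
      ospNewSharedData1DStride(vertices, OSP_VEC3F, count, sizeof(Vertex));
  ospSetParam(mesh, "vertex.position", OSP_DATA, &positions);
  ospCommit(mesh);
  ospRelease(positions);
}
```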
OSPRay MPI v2.2.0
- Improve parallelism of framebuffer compression & decompression when collecting the final framebuffer to the head rank. This provides a substantial performance improvement when using just a few ranks or large framebuffers (both in pixel count and channel count).
- The MPI module will now default to setting thread affinity off if no option is selected. This improves thread usage and core assignment of threads in most cases where no specific options are provided to the MPI runtime.
- Fix bug where OSPObject handles were not translated to worker-local pointers when committing an OSPData in the MPIOffloadDevice.
- Fix handling of `OSP_STRING` parameters (see the sketch after this list).
- Move from `ospcommon` to `rkcommon` v1.4.2.
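A minimal sketch (C, using the OSPRay 2.x API) of setting an `OSP_STRING` parameter, the case whose offload-device handling this release fixes. The debug renderer and its `method` parameter are assumptions used for illustration; substitute any string parameter your objects take.

```c
#include <ospray/ospray.h>

void configure_debug_renderer(void)
{
  OSPRenderer renderer = ospNewRenderer("debug");

  /* OSP_STRING parameters are passed as a pointer to the C string pointer. */
  const char *method = "primID"; /* assumed value; shades by primitive ID */
  ospSetParam(renderer, "method", OSP_STRING, &method);

  ospCommit(renderer);
  ospRelease(renderer);
}
```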
OSPRay MPI v2.1.0
- Add support for `ospGetTaskDuration` to query the render time of an asynchronous (or synchronous) `renderFrame` call (see the sketch after this list).
- Use flush bcasts to allow us to use non-owning views for data transfer. Note that shared `OSPData` with strides is currently transmitted whole.
- Fix member variable type for bcast.
- Fix incorrect data size computation in the offload device.
- Fix large data chunking support for MPI Bcast.
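A minimal sketch (C, using the OSPRay 2.x API) of timing an asynchronous `ospRenderFrame` call with `ospGetTaskDuration`. Object setup is omitted; the handles passed in stand for already committed objects.

```c
#include <stdio.h>
#include <ospray/ospray.h>

void render_and_time(OSPFrameBuffer framebuffer, OSPRenderer renderer,
                     OSPCamera camera, OSPWorld world)
{
  /* ospRenderFrame returns immediately with a future for the running task. */
  OSPFuture future = ospRenderFrame(framebuffer, renderer, camera, world);
  ospWait(future, OSP_TASK_FINISHED); /* block until the frame completes */

  /* Query how long the (now finished) render task took, in seconds. */
  float seconds = ospGetTaskDuration(future);
  printf("renderFrame took %f s\n", seconds);

  ospRelease(future);
}
```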
OSPRay MPI v2.0.1
This release of the MPI module requires OSPRay 2.0.1 or higher
- The MPI module is now built outside of the OSPRay source tree, and can be built as part of the superbuild.

This is the first release of the MPI module for OSPRay v2.x; it updates the API to match:
- Asynchronous rendering is now supported in both the distributed and offload devices.
- The regions parameter on the `OSPWorld` has been simplified to a list of boxes; sharing data is now determined by two ranks specifying the same bounding box (see the sketch after this list).
- When using the distributed device, if all ranks specify the exact same data, the renderer will switch to image-parallel rendering, enabling use of the path tracer (see `ospMPIDistributedTutorialReplicatedData`).
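A minimal sketch (C, using the OSPRay 2.x API) of a rank advertising the bounds of its local data on the world as a list of boxes. The parameter name `"region"` and the brick layout are assumptions for illustration; consult the module's README and tutorials for the exact usage.

```c
#include <ospray/ospray.h>
#include <ospray/ospray_util.h>

/* Each rank owns a unit brick along x; two ranks passing an identical box
   would be treated as sharing the same data. */
void set_local_region(OSPWorld world, int rank)
{
  float bounds[6] = {
      (float)rank, 0.f, 0.f,      /* lower corner */
      (float)rank + 1.f, 1.f, 1.f /* upper corner */
  };

  /* Copy into an OSPRay-owned array so the stack buffer need not outlive
     this function. */
  OSPData view = ospNewSharedData1D(bounds, OSP_BOX3F, 1);
  OSPData region = ospNewData1D(OSP_BOX3F, 1);
  ospCopyData1D(view, region, 0);
  ospRelease(view);

  ospSetParam(world, "region", OSP_DATA, &region); /* assumed param name */
  ospCommit(world);
  ospRelease(region);
}
```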
See the README and tutorials for more information on updates to the API, command line options and object parameters.