The current QBVH implementation requires that the content type be aware of its own index within the QBVH, which means that any consumer needs to manage these values and correlate them directly.
An alternate perspective on this problem is that GenericQbvh isn't generic enough to meaningfully allow implementation of custom storages beyond "It's an array on (device)".
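For concreteness, here is a minimal, self-contained sketch of the pattern this imposes on consumers. The trait below is only an illustrative stand-in for the real index-carrying requirement, and the leaf type and field names are hypothetical:

```rust
// Illustrative stand-in for the index-carrying leaf requirement; the real
// IndexedData trait in parry is assumed to impose a similar
// "report your own usize index" contract.
trait IndexedLeaf {
    fn index(&self) -> usize;
}

// The consumer's leaf type has to carry the index itself, so the application
// is responsible for assigning it and keeping it in sync with its entities.
#[derive(Clone, Copy, Debug)]
struct ColliderLeaf {
    entity_id: u64,    // application-level identity
    qbvh_index: usize, // index the application must manage by hand
}

impl IndexedLeaf for ColliderLeaf {
    fn index(&self) -> usize {
        self.qbvh_index
    }
}
```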
Problem
If I naively map application keys to arbitrary points in the usize space of IndexedData, memory usage explodes: the current implementation creates a dense vector in which the usize index is the literal offset into the vector, so up to 4 giga-elements end up allocated and default-initialized.
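A small, hypothetical illustration (not parry code) of why dense index-as-offset storage blows up under that naive mapping:

```rust
// Hypothetical illustration of dense, index-as-offset storage: the backing
// vector must grow to (max index + 1) elements, all default-initialized,
// regardless of how many entries are actually live.
fn dense_insert<T: Default + Clone>(storage: &mut Vec<T>, index: usize, value: T) {
    if index >= storage.len() {
        storage.resize(index + 1, T::default());
    }
    storage[index] = value;
}

fn main() {
    let mut storage: Vec<u64> = Vec::new();
    // One entry at a modest "sparse" index already wastes ~10M slots; mapping
    // keys anywhere in the u32 range would push this toward 4 giga-elements.
    dense_insert(&mut storage, 10_000_000, 42);
    println!("live entries: 1, allocated slots: {}", storage.len());
}
```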
With the new "update" semantics from #113, it's possible to update the Qbvh instance on the fly, but any gap left in the index space produces runs of invalid entries in the array. That isn't a problem in itself, and is probably necessary for maintainable yet effective CUDA and SIMD implementations. It also isn't new: the same thing happens when simply inserting values with prior
The real problem is that correlating with application state whose key space changes over time makes it very difficult to assign and manage keys efficiently, and to keep them in sync with a non-fixed set of application-level entities.
Workaround
I've taken to using a SlabMap to correlate my application-specific keys with slab IDs, and using those slab IDs as the actual IndexedData key. Unfortunately, this means I then need to write custom visitors that resolve such IDs back to references to my actual key type for further correlation with application-level entities.
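A minimal sketch of that workaround, with a hand-rolled translator standing in for the actual SlabMap crate; all names and types here are illustrative:

```rust
use std::collections::HashMap;
use std::hash::Hash;

// Illustrative hand-rolled slab standing in for the SlabMap workaround:
// application keys are translated to small, dense IDs, and those IDs are what
// the Qbvh leaf data actually carries as its usize index.
struct KeyTranslator<K: Hash + Eq + Clone> {
    key_to_id: HashMap<K, usize>,
    id_to_key: Vec<Option<K>>,
    free_ids: Vec<usize>,
}

impl<K: Hash + Eq + Clone> KeyTranslator<K> {
    fn new() -> Self {
        Self {
            key_to_id: HashMap::new(),
            id_to_key: Vec::new(),
            free_ids: Vec::new(),
        }
    }

    // Returns the dense ID for `key`, assigning a free slot if the key is new.
    fn insert(&mut self, key: K) -> usize {
        if let Some(&id) = self.key_to_id.get(&key) {
            return id;
        }
        let id = match self.free_ids.pop() {
            Some(id) => id,
            None => {
                self.id_to_key.push(None);
                self.id_to_key.len() - 1
            }
        };
        self.id_to_key[id] = Some(key.clone());
        self.key_to_id.insert(key, id);
        id
    }

    // Visitors hand back a dense ID; this resolves it to the application key.
    fn resolve(&self, id: usize) -> Option<&K> {
        self.id_to_key.get(id).and_then(|k| k.as_ref())
    }

    // Frees the slot so a later insert can reuse it, keeping the space dense.
    fn remove(&mut self, key: &K) -> Option<usize> {
        let id = self.key_to_id.remove(key)?;
        self.id_to_key[id] = None;
        self.free_ids.push(id);
        Some(id)
    }
}
```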
I've attempted to write a wrapper for this, but the generic functions needed to cross-correlate IDs end up duplicating a large number of signatures from GenericQbvh.
Possible solution
If GenericQbvh were made to work in terms of a trait QbvhStorage<LeafData, LeafId> { ... } capable of defining the mapping from LeafId to LeafData references, or even to literal usize offsets into a vector of LeafData, then the dense representation could either be avoided for truly arbitrary index values, or be wrapped with a way of statefully assigning indices that translates sparse keys to dense storage.
In other words, one of the following two options:
1. Allow arbitrary keys by providing a means of binding a HashMap representation or similar (may make a SIMD implementation difficult).
2. Make storage generic enough to allow a definition which automatically manages the index space, given an arbitrary Eq + Ord + Hash key-type, while still providing the current method which "cares" which index it is and assumes the application will manage these values.
I believe option 2 is the best one to implement, as it maximizes functionality with a minimal increase in maintenance cost. This assumes it is possible to bake the trait-level mapping operations into insert/removal/update operations so that a dense storage can be maintained efficiently without much impact on the SIMD and CUDA implementations.
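A rough, non-authoritative sketch of what such a storage trait could look like; every name, method, and the Aabb placeholder below is hypothetical, and is only meant to show how both the current "application manages indices" storage and a key-managing storage could sit behind one interface:

```rust
// Hypothetical shape of the proposed storage abstraction; not an existing
// parry API. `Aabb` is a placeholder for the real bounding-volume type.
struct Aabb;

trait QbvhStorage<LeafData, LeafId> {
    /// Translate an opaque leaf ID into the dense offset used internally.
    fn offset_of(&self, id: &LeafId) -> Option<usize>;

    /// Insert or update a leaf, returning the ID the caller should keep.
    /// A dense storage would treat `LeafId = usize` as the offset itself;
    /// a key-managing storage would assign a fresh dense slot for new keys.
    fn upsert(&mut self, id: LeafId, data: LeafData, aabb: Aabb) -> LeafId;

    /// Remove a leaf, freeing its dense slot for reuse.
    fn remove(&mut self, id: &LeafId) -> Option<LeafData>;

    /// Dense access for the SIMD/CUDA traversal kernels, which only ever see
    /// contiguous offsets and never the application-level key type.
    fn leaf(&self, offset: usize) -> Option<&LeafData>;
}
```

With that split, the current behaviour would be the trivial implementation where LeafId is the usize offset itself, while a slab- or HashMap-backed implementation could manage the index space for arbitrary Eq + Ord + Hash keys.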