Command Batching for Insert and Remove #5
Comments
Closes #41

This PR removes the `'static` bound from the `Event` trait. `World::send` can now send events containing borrowed data from outside the `World`. However, `Sender::send` still requires `'static`.

## Additional Changes

- Removed `World::send_many`. I wasn't sure how to make the lifetimes correct here, and the method doesn't have any real benefits until #4 or #5 is implemented. It could be added again if needed.
- Added `Event::This<'a>`. Because `Event` is no longer `'static`, we need this associated type in order to get a canonical `TypeId`. It also resolves a tricky lifetime issue in the implementation of `HandlerParam` for `Receiver`/`ReceiverMut`. This does mean that `Event` is now an `unsafe` trait, but it can continue to be implemented safely with the derive macro.
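As a rough illustration of why an associated type like `Event::This<'a>` yields a canonical `TypeId` for non-`'static` events, here is a standalone sketch. The trait and types below are simplified stand-ins, not evenio's real definitions:

```rust
use std::any::TypeId;

// Sketch only: a trait whose implementors may borrow data can still
// produce one canonical TypeId by naming the 'static instantiation of
// itself through a generic associated type.
trait Event {
    /// `Self` with every lifetime replaced by `'a`.
    type This<'a>;
}

struct Message<'a> {
    text: &'a str,
}

impl<'a> Event for Message<'a> {
    type This<'b> = Message<'b>;
}

/// Canonical TypeId for an event, independent of the lifetime it borrows for.
fn event_type_id<E: Event>() -> TypeId
where
    E::This<'static>: 'static,
{
    TypeId::of::<E::This<'static>>()
}

fn main() {
    let owned = String::from("hello");
    let short_lived = Message { text: &owned };

    // Works for a borrowing (non-'static) event...
    fn id_for<'a>(_: &Message<'a>) -> TypeId {
        event_type_id::<Message<'a>>()
    }

    // ...and agrees with the id of the 'static instantiation.
    assert_eq!(id_for(&short_lived), TypeId::of::<Message<'static>>());
    let _ = short_lived.text;
}
```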
Can you reiterate your thoughts on batching for non-insert/remove events? I had to add an event called so that I could iterate in parallel. I am thinking event batching and being able to iterate over all might fix this issue to some extent. Alternatively, do you have any other opinions?
@andrewgazelka If you're asking for the ability to execute a set of handlers in parallel given a list of events, then that's not really possible, for a few reasons.
So I think the best approach is to store your events in a collection and process them in parallel as you are currently doing.
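The suggested workaround (store events in a plain collection and fan the work out across threads yourself) could look roughly like this sketch. The `Damage` event type and the chunked `std::thread::scope` fan-out are illustrative assumptions, not evenio API:

```rust
// Sketch: process a batch of events in parallel outside the ECS.
struct Damage {
    amount: u32,
}

fn process_in_parallel(events: &[Damage]) -> u32 {
    // Split the event list into one chunk per available core.
    let workers = std::thread::available_parallelism()
        .map(|n| n.get())
        .unwrap_or(1);
    let chunk = events.len().div_ceil(workers).max(1);

    // Scoped threads may borrow `events` directly; each thread
    // reduces its chunk and the results are combined at the end.
    std::thread::scope(|s| {
        let handles: Vec<_> = events
            .chunks(chunk)
            .map(|c| s.spawn(move || c.iter().map(|d| d.amount).sum::<u32>()))
            .collect();
        handles.into_iter().map(|h| h.join().unwrap()).sum()
    })
}

fn main() {
    let events: Vec<Damage> = (1..=100).map(|i| Damage { amount: i }).collect();
    assert_eq!(process_in_parallel(&events), 5050);
}
```

A real application would more likely use a data-parallelism crate such as rayon, but the shape is the same: a plain `Vec` of events, reduced in parallel, with any world mutation applied serially afterwards.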
Hmm this is really disappointing to hear. I feel like this will make consuming the ECS I am laying out in Hyperion a lot more complicated.
Makes sense.
Another more glaring issue is that you wouldn't be allowed to access any data mutably through those handlers because a handler could be running in parallel with itself. The accessed world data would need atomics and/or locks.
My thought was rather you could have some type of
Maybe that could work to some extent, but then what happens after? Would serial handlers be able to handle events originating from parallel handlers? This also introduces nondeterminism into the control flow. You would also need some notion of sync points to do structural world changes, since that can't be done in parallel.
hmmm I am not sure exactly how it would work. I think it is important to think about how events should work though because I am at the point where I am considering making every event a bundled event. Perhaps this is the best strategy now. |
Wait, would it be possible to have a
A multi receiver might be a nice convenience, but it wouldn't help performance. This issue is only concerned with eliminating the `O(N^2)` behavior of repeated `Insert`/`Remove` events.
Whenever a component is added or removed from an entity, all of its components must move to a different archetype. This quickly becomes a performance issue when large numbers of components are added/removed in sequence, such as during entity initialization. To create an entity with N components, we must do N * (N - 1) / 2 needless component moves to add everything. Yikes!
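As a sanity check on the arithmetic: adding the i-th component forces the (i - 1) components already present to move to the new archetype, so the total is 0 + 1 + … + (N - 1) = N(N - 1)/2. A tiny sketch:

```rust
// Count component moves when inserting N components one at a time.
// The i-th insertion moves the (i - 1) existing components, so the
// total is the triangular number N * (N - 1) / 2.
fn sequential_moves(n: u64) -> u64 {
    (0..n).sum()
}

fn main() {
    assert_eq!(sequential_moves(10), 10 * 9 / 2); // 45 needless moves
    assert_eq!(sequential_moves(1), 0); // a single component moves nothing
}
```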
`bevy_ecs` and other libraries address this problem with bundles. Bundles let us insert/remove sets of components at a time, removing all intermediate steps. However, I dislike bundles for a few reasons, one being their overlap with `evenio`'s events.

Rather than adding ad-hoc features to the ECS, what if we optimize the features we already have? This is where batching comes in. Whenever an `Insert` or `Remove` event finishes broadcasting, we add it to a buffer instead of applying it immediately. Once a handler that could potentially observe the changes is about to run, we flush the buffer. This lets us turn `O(N^2)` entity initialization into a roughly `O(N)` operation.

To implement this, every `SystemList` contains the union of all components accessed by the systems in the list as a `BitSet<ComponentIdx>`. We also have another `BitSet<ComponentIdx>` associated with the buffered events to track which components have been changed. Before we run through a `SystemList`, we check if the system list bitset overlaps with the buffer bitset. If it does, then the buffer needs to be flushed. Flushing the buffer involves sorting by component ID, a deduplication pass, and finally moving the entity to the destination archetype.

If an entity has a component added or removed, the `SystemList` associated with it may change. The batching process will have to traverse the archetype graph, tracking where the entity would be if there were no batching involved. For this reason, it only seems feasible to batch one entity at a time.