ECS Change Events #54

Closed · cart opened this issue Jul 12, 2020 · 6 comments
Labels
A-ECS Entities, components, systems, and events

Comments

cart (Member) commented Jul 12, 2020

The ability to query added/removed/changed components would enable a whole class of optimizations (e.g. only updating Transforms when a Translation/Rotation/Scale component changes). It would also make a number of UI patterns easier (e.g. only run this when UiComponent state is modified).

cart added the A-ECS label Jul 12, 2020
cart (Member Author) commented Jul 19, 2020

Component modification tracking was added here: 31d00ad

Added + removed events would still be useful for some things, but `Changed<T>` provides most of the utility we needed from this feature.
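
For context, the pattern this enables looks roughly like the following, using the `Query<Changed<T>>` style syntax that appears later in this thread (component names and iteration details are illustrative, not the exact API of the linked commit):

```rust
// Illustrative components; names are placeholders.
struct Translation(f32);
struct Transform(f32);

// Only entities whose Translation was modified since the last
// clear_trackers() call show up here, so unchanged entities cost
// nothing per frame.
fn transform_system(query: Query<(Changed<Translation>, &mut Transform)>) {
    for (translation, mut transform) in query.iter() {
        transform.0 = translation.0;
    }
}
```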

cart (Member Author) commented Jul 22, 2020

Just added `Added<T>` queries here: a695304

Removed queries need to be a little different because archetypes no longer store the removed entities. I think users will want an API that builds on top of the existing query system, so I'm envisioning something like this:

```rust
fn system(query: Query<(&A, &mut B)>) {
    // returns entities that (1) matched the query and (2) removed B last update
    for entity in query.removed::<B>() {
    }
}
```

or this:

```rust
fn system(query: Query<(&A, &mut B)>) {
    // returns entities that matched the query and removed A and B last update
    for entity in query.removed() {
    }
}
```

cart (Member Author) commented Jul 22, 2020

There is also an issue with how we handle all change events:

```
schedule.run()
  system_1: Query<Changed<T>>
  system_2: Query<&mut T>, modifies T
  system_3: Query<Changed<T>>
  world.clear_trackers()
```

system_1 wants to run whenever T changes, but it will never observe changes made by system_2: world.clear_trackers() wipes them at the end of the schedule, before system_1 gets another chance to run.
system_3 does receive the change state because it runs after system_2 and before world.clear_trackers().

I can think of two solutions to this problem:

  • Leave it as is. If you want to consume change events, add the system after the modifying systems.
  • Double buffer modified events, so each system will always have a chance to receive the change (much like how Events currently works; see the sketch below). This case is a bit more complicated because we don't have EventReaders here.
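
A minimal sketch of what double buffering could look like, independent of bevy's actual storage (all names here are illustrative):

```rust
type Entity = u32; // stand-in for bevy's Entity id

// Changes recorded this update stay visible for one more update, so a
// system scheduled before the modifying system still observes them on
// its next run.
#[derive(Default)]
struct ChangeBuffers {
    current: Vec<Entity>,  // entities modified during this update
    previous: Vec<Entity>, // entities modified during the previous update
}

impl ChangeBuffers {
    fn mark_modified(&mut self, entity: Entity) {
        self.current.push(entity);
    }

    // A change is visible if it happened this update or the previous one.
    fn was_modified(&self, entity: Entity) -> bool {
        self.current.contains(&entity) || self.previous.contains(&entity)
    }

    // Runs once per update instead of a hard clear: current becomes
    // previous, and changes age out after two updates.
    fn swap(&mut self) {
        std::mem::swap(&mut self.current, &mut self.previous);
        self.current.clear();
    }
}
```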

cart (Member Author) commented Jul 22, 2020

i gave "system order-independent tracking" a little more thought:

ideas

  • double buffering
    • Lots of state (entities moving archetypes / getting removed) makes this both complex and potentially expensive: for an old modified buffer [false, true, false] and a new modified buffer [true, false, false], buffer[0] could correspond to different entities
      • would probably need to keep two lists of entities (last frame / current frame) and do a hash lookup to see if the "old" buffer's entities are still valid
  • Stateful queries w/ modified generations
    • no world.clear_trackers(); all tracking queries are stateful
    • each query maintains a last_seen_generation: HashMap<Entity, ModifiedGeneration>
    • Mut pointers either directly increment the generation or set a bool like they do today (which then results in an increment after system execution)
    • if we track both "modified_generation" and "modified_this_update", we could keep the current "cheap but order dependent" tracking and add "expensive but order independent" tracking, at the cost of some additional bookkeeping overhead (we would need to write to two places instead of one, and archetype moves / inserts would need to copy 2x the amount of change metadata)
  • for systems with a Changed<T> query, identify the systems with &mut T that run after them and track the changes those make
    • instead of Mut { modified: &mut bool }, we would have Mut { modified: &mut [bool] }
    • potential to be very expensive
  • no automatic clears
    • puts the burden of clearing on the user. Could result in bad cross-system interactions (what if two systems clear? what happens when someone else's library clears something you need?). If clears were per-system this would be reasonable, but if we can make them per-system we could just do double buffering (I think it's the same problem)

In general I consider this to be a hard problem that doesn't need solving right now. The optimizations we care about right now should all play nicely with the current setup, and users can design their code around this. I can't think of a way to make order independence work without adding some level of statefulness somewhere (which translates to adding overhead).

I'll open a new issue for order-independent change tracking, but I think that solution should hinge on:

  1. allowing the current "cheap" form of tracking
  2. not slowing down normal non-tracked systems

cart (Member Author) commented Jul 22, 2020

The last part of this issue (now that we've tabled order-independence) is component removal tracking.

Rather than storing component removals in archetypes and adding removal queries (as suggested above), I think it might be easier (both to implement and understand) to just store component removals directly in world. Removals could then be iterated using: world.iter_removed::<T>(). bevy_ecs Queries could mirror that interface, which would be safe because removals cannot happen without exclusive world access.

world.clear_trackers() would still clear removals
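
A rough sketch of that shape (type and method names other than `iter_removed` are made up for illustration):

```rust
use std::any::TypeId;
use std::collections::HashMap;

type Entity = u32; // stand-in for bevy's Entity id

// Removals stored directly on the world, keyed by component type.
#[derive(Default)]
struct RemovedComponents {
    removed: HashMap<TypeId, Vec<Entity>>,
}

impl RemovedComponents {
    // Recorded at removal time; removals require exclusive world access,
    // which is what makes mirroring this in Queries safe.
    fn record<T: 'static>(&mut self, entity: Entity) {
        self.removed.entry(TypeId::of::<T>()).or_default().push(entity);
    }

    // Mirrors the proposed world.iter_removed::<T>().
    fn iter_removed<T: 'static>(&self) -> impl Iterator<Item = Entity> + '_ {
        self.removed
            .get(&TypeId::of::<T>())
            .into_iter()
            .flatten()
            .copied()
    }

    // Cleared by world.clear_trackers(), like the other tracking state.
    fn clear(&mut self) {
        self.removed.clear();
    }
}
```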

cart (Member Author) commented Jul 23, 2020

f82af10 adds component removal tracking, which means we can finally close this!

cart closed this as completed Jul 23, 2020
bors bot pushed a commit that referenced this issue Mar 19, 2021
# Problem Definition

The current change tracking (via flags for both components and resources) fails to detect changes in systems that are scheduled to run earlier in the frame than the system making the change.

This issue is discussed at length in [#68](#68) and [#54](#54).

This is very much a draft PR, and contributions are welcome and needed.

# Criteria
1. Each change is detected at least once, no matter the ordering.
2. Each change is detected at most once, no matter the ordering.
3. Changes should be detected the same frame that they are made.
4. Competitive ergonomics. Ideally does not require opting-in.
5. Low CPU overhead of computation.
6. Memory efficient. This must not increase over time, except where the number of entities / resources does.
7. Changes should not be lost for systems that don't run.
8. A frame needs to act as a pure function. Given the same set of entities / components it needs to produce the same end state without side-effects.

**Exact** change-tracking proposals satisfy criteria 1 and 2.
**Conservative** change-tracking proposals satisfy criteria 1 but not 2.
**Flaky** change tracking proposals satisfy criteria 2 but not 1.

# Code Base Navigation

There are three types of flags: 
- `Added`: A piece of data was added to an entity / `Resources`.
- `Mutated`: A piece of data may have been modified, because its `DerefMut` was accessed.
- `Changed`: The bitwise OR of `Added` and `Mutated`.
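
For illustration, the first two flags can be thought of as bits, with `Changed` derived rather than stored separately (the constants here are assumptions, not the actual encoding; see `ComponentFlags` in "bevy_ecs/core/archetypes.rs" for the real definition):

```rust
// Assumed bit layout for illustration only.
const ADDED: u8 = 0b01;
const MUTATED: u8 = 0b10;

fn changed(flags: u8) -> bool {
    // Changed is the bitwise OR of Added and Mutated.
    flags & (ADDED | MUTATED) != 0
}
```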

The special behavior of `ChangedRes` with respect to the scheduler is being removed in [#1313](#1313) and does not need to be reproduced.

`ChangedRes` and friends can be found in "bevy_ecs/core/resources/resource_query.rs".

The `Flags` trait for Components can be found in "bevy_ecs/core/query.rs".

`ComponentFlags` are stored in "bevy_ecs/core/archetypes.rs", defined on line 446.

# Proposals

**Proposal 5 was selected for implementation.**

## Proposal 0: No Change Detection

The baseline, where computations are performed on everything regardless of whether it changed.

**Type:** Conservative

**Pros:**
- already implemented
- will never miss events
- no overhead

**Cons:**
- tons of repeated work
- doesn't allow users to avoid repeating work (or monitoring for other changes)

## Proposal 1: Earlier-This-Tick Change Detection

The current approach as of Bevy 0.4. Flags are set, and then flushed at the end of each frame.

**Type:** Flaky

**Pros:**
- already implemented
- simple to understand
- low memory overhead (2 bits per component)
- low time overhead (clear every flag once per frame)

**Cons:**
- misses systems based on ordering
- systems that don't run every frame miss changes
- duplicates detection when looping
- can lead to unresolvable circular dependencies

## Proposal 2: Two-Tick Change Detection

Flags persist for two frames, using a double-buffer system identical to that used in events.

A change is observed if it is found in either the current frame's list of changes or the previous frame's.

**Type:** Conservative

**Pros:**
- easy to understand
- easy to implement
- low memory overhead (4 bits per component)
- low time overhead (bit mask and shift every flag once per frame)

**Cons:**
- can result in a great deal of duplicated work
- systems that don't run every frame miss changes
- duplicates detection when looping
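
A sketch of the "bit mask and shift" maintenance this would need (the flag layout here is an assumption for illustration):

```rust
const CURRENT_MUTATED: u8 = 0b01;  // set when a component is mutated
const PREVIOUS_MUTATED: u8 = 0b10; // carried over from the last frame

// End-of-frame maintenance: the current bit shifts into the previous
// position, so a change stays observable for exactly two frames.
fn flush(flags: &mut u8) {
    *flags = (*flags & CURRENT_MUTATED) << 1;
}

// A change is observed if it is in the current or previous frame's set.
fn observed(flags: u8) -> bool {
    flags & (CURRENT_MUTATED | PREVIOUS_MUTATED) != 0
}
```

Proposal 3 below would use the same flush but observe only the previous-frame bit.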

## Proposal 3: Last-Tick Change Detection

Flags persist for two frames, using a double-buffer system identical to that used in events.

A change is observed if it is found in the previous frame's list of changes.

**Type:** Exact

**Pros:**
- exact
- easy to understand
- easy to implement
- low memory overhead (4 bits per component)
- low time overhead (bit mask and shift every flag once per frame)

**Cons:**
- change detection is always delayed, possibly causing painful chained delays
- systems that don't run every frame miss changes
- duplicates detection when looping

## Proposal 4: Flag-Doubling Change Detection

Combine Proposal 2 and Proposal 3. Differentiate between `JustChanged` (current behavior) and `Changed` (Proposal 3).

Pack this data into the flags according to [this implementation proposal](#68 (comment)).

**Type:** Flaky + Exact

**Pros:**
- allows users to choose between immediate-but-flaky and delayed-but-exact detection
- easy to implement
- low memory overhead (4 bits per component)
- low time overhead (bit mask and shift every flag once per frame)

**Cons:**
- users must specify the type of change detection required
- still quite fragile to system ordering effects when using the flaky `JustChanged` form
- cannot get immediate + exact results
- systems that don't run every frame miss changes
- duplicates detection when looping

## [SELECTED] Proposal 5: Generation-Counter Change Detection

A global counter is increased after each system is run. Each component saves the time of last mutation, and each system saves the time of last execution. Mutation is detected when the component's counter is greater than the system's counter. Discussed [here](#68 (comment)). How to handle addition detection is unsolved; the current proposal is to use the highest bit of the counter as in proposal 1.

**Type:** Exact (for mutations), flaky (for additions)

**Pros:**
- low time overhead (set component counter on access, set system counter after execution)
- robust to systems that don't run every frame
- robust to systems that loop

**Cons:**
- moderately complex implementation
- must be modified as systems are inserted dynamically
- medium memory overhead (4 bytes per component + system)
- unsolved addition detection
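
A minimal sketch of the counter comparison at the heart of this proposal, with illustrative names and ignoring counter wrap-around (which a real implementation must handle):

```rust
// Global counter, bumped after each system runs.
struct ChangeTick(u32);

// Stored per component: the tick at which it was last mutated.
struct ComponentTicks {
    mutated: u32,
}

// Stored per system: the tick at which it last ran.
struct SystemTicks {
    last_run: u32,
}

impl ComponentTicks {
    // Called when the component's DerefMut is accessed.
    fn set_mutated(&mut self, current: &ChangeTick) {
        self.mutated = current.0;
    }

    // Mutation is detected when the component's tick is newer than the
    // system's last run, regardless of system ordering within the frame.
    fn is_mutated(&self, system: &SystemTicks) -> bool {
        self.mutated > system.last_run
    }
}
```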

## Proposal 6: System-Data Change Detection

For each system, track which system's changes it has seen. This approach is only worth fully designing and implementing if Proposal 5 fails in some way.  

**Type:** Exact

**Pros:**
- exact
- conceptually simple

**Cons:**
- requires storing data on each system
- implementation is complex
- must be modified as systems are inserted dynamically

## Proposal 7: Total-Order Change Detection

Discussed [here](#68 (comment)). This proposal is somewhat complicated by the new scheduler, but I believe it should still be conceptually feasible. This approach is only worth fully designing and implementing if Proposal 5 fails in some way.  

**Type:** Exact

**Pros:**
- exact
- efficient data storage relative to other exact proposals

**Cons:**
- requires access to the scheduler
- complex implementation and difficulty grokking
- must be modified as systems are inserted dynamically

# Tests

- We will need to verify properties 1, 2, 3, 7 and 8. Priority: 1 > 2 = 3 > 8 > 7
- Ideally we can use identical user-facing syntax for all proposals, allowing us to re-use the same test suite for each.
- When writing tests, we need to carefully specify order using explicit dependencies.
- These tests will need to be duplicated for both components and resources.
- We need to be sure to handle cases where ambiguous system orders exist.

`changing_system` is always the system that makes the changes, and `detecting_system` always detects the changes.

The component / resource changed will be simple boolean wrapper structs.
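
For example, something as small as:

```rust
// Minimal wrappers of the sort the tests would mutate and detect
// (names are placeholders).
struct Flag(bool);    // component under test
struct FlagRes(bool); // resource under test
```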

## Basic Added / Mutated / Changed

2 x 3 design:
- Resources vs. Components
- Added vs. Changed vs. Mutated
- `changing_system` runs before `detecting_system`
- verify at the end of tick 2

## At Least Once

2 x 3 design:
- Resources vs. Components
- Added vs. Changed vs. Mutated
- `changing_system` runs after `detecting_system`
- verify at the end of tick 2

## At Most Once

2 x 3 design:
- Resources vs. Components
- Added vs. Changed vs. Mutated
- `changing_system` runs once before `detecting_system`
- increment a counter based on the number of changes detected
- verify at the end of tick 2

## Fast Detection
2 x 3 design:
- Resources vs. Components
- Added vs. Changed vs. Mutated
- `changing_system` runs before `detecting_system`
- verify at the end of tick 1

## Ambiguous System Ordering Robustness
2 x 3 x 2 design:
- Resources vs. Components
- Added vs. Changed vs. Mutated
- `changing_system` runs [before/after] `detecting_system` in tick 1
- `changing_system` runs [after/before] `detecting_system` in tick 2

## System Pausing
2 x 3 design:
- Resources vs. Components
- Added vs. Changed vs. Mutated
- `changing_system` runs in tick 1, then is disabled by run criteria
- `detecting_system` is disabled by run criteria until it is run once during tick 3
- verify at the end of tick 3

## Addition Causes Mutation

2 design:
- Resources vs. Components
- `adding_system_1` adds a component / resource
- `adding_system_2` adds the same component / resource
- verify the `Mutated` flag at the end of the tick
- verify the `Added` flag at the end of the tick

First check tests for: #333
Second check tests for: #1443

## Changes Made By Commands

- `adding_system` runs in Update in tick 1, and sends a command to add a component 
- `detecting_system` runs in Update in tick 1 and 2, after `adding_system`
- We can't detect the changes in tick 1, since they haven't been processed yet
- If we were to track these changes as being emitted by `adding_system`, we can't detect the changes in tick 2 either, since `detecting_system` has already run once after `adding_system` :( 

# Benchmarks

See: [general advice](https://github.com/bevyengine/bevy/blob/master/docs/profiling.md), [Criterion crate](https://github.com/bheisler/criterion.rs)

There are several critical parameters to vary: 
1. entity count (1 to 10^9)
2. fraction of entities that are changed (0% to 100%)
3. cost to perform work on changed entities, i.e. workload (1 ns to 1s)

1 and 2 should be varied between benchmark runs. 3 can be added on computationally.
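
A Criterion skeleton for sweeping parameter 1 might look like this; `setup_world` and `run_detecting_system` are hypothetical stand-ins for whichever proposal is being measured:

```rust
use criterion::{criterion_group, criterion_main, BenchmarkId, Criterion};

struct World; // stand-in for whatever world type the proposal uses

// Hypothetical helpers: spawn `n` entities and mark the given fraction
// of them as changed each frame.
fn setup_world(_entities: u64, _fraction_changed: f64) -> World {
    World
}

fn run_detecting_system(_world: &mut World) {}

fn change_detection(c: &mut Criterion) {
    let mut group = c.benchmark_group("change_detection");
    // Parameter 1: entity count, swept between runs.
    for entity_count in [1_000u64, 100_000, 10_000_000] {
        group.bench_with_input(
            BenchmarkId::from_parameter(entity_count),
            &entity_count,
            |b, &n| {
                // Parameter 2 (fraction changed) is fixed at 10% here;
                // parameter 3 (workload) can be added on computationally.
                let mut world = setup_world(n, 0.1);
                b.iter(|| run_detecting_system(&mut world));
            },
        );
    }
    group.finish();
}

criterion_group!(benches, change_detection);
criterion_main!(benches);
```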

We want to measure:
- memory cost
- run time

We should collect these measurements across several frames (100?) to reduce bootup effects and accurately measure the mean, variance and drift.

Entity-component change detection is much more important to benchmark than resource change detection, due to the orders of magnitude higher number of pieces of data.

No change detection at all should be included in benchmarks as a second control for cases where missing changes is unacceptable.

## Graphs
1. y: performance, x: log_10(entity count), color: proposal, facet: performance metric. Set cost to perform work to 0. 
2. y: run time, x: cost to perform work, color: proposal, facet: fraction changed. Set number of entities to 10^6
3. y: memory, x: frames, color: proposal

# Conclusions
1. Is the theoretical categorization of the proposals correct according to our tests?
2. How does the performance of the proposals compare without any load?
3. How does the performance of the proposals compare with realistic loads?
4. At what workload does more exact change tracking become worth the (presumably) higher overhead?
5. When does adding change-detection to save on work become worthwhile?
6. Is there enough divergence in performance between the best solutions in each class to ship more than one change-tracking solution?

# Implementation Plan

1. Write a test suite.
2. Verify that tests fail for existing approach.
3. Write a benchmark suite.
4. Get performance numbers for existing approach.
5. Implement, test and benchmark various solutions using a Git branch per proposal.
6. Create a draft PR with all solutions and present results to team.
7. Select a solution and replace existing change detection.

Co-authored-by: Brice DAVIER <[email protected]>
Co-authored-by: Carter Anderson <[email protected]>