Right now we have lots of subsystems that operate on the same level with an openly exposed interface, despite being tightly coupled to another subsystem. This makes it harder than necessary to reason about the system and likely leaves performance on the table. It also leads to publicly exposed types which cannot offer encapsulation, because they need to be visible globally.
Instead we should aim for the code to reflect the actual relationships better, which would likely also resolve quite a few dependency cycles. E.g. if a service is only needed by one subsystem, it should not be a subsystem itself, but rather a service spawned by that subsystem, completely hiding it from the top level. A good example of this is combining approval distribution with approval voting. Other subsystems like the runtime API, which are used from many subsystems, would likely be better off as a service that gets passed into subsystems, offering a local LRU cache on top of a "global" one in the service, similar to what we do now with "RuntimeInfo". We would then have a simple function-based API to retrieve runtime information, with smart caching (e.g. by SessionIndex) already included.
More details to come; I would definitely start with some simplifications in the course of #616.
Some more concrete WIP ideas following:
Simplify Backing Pipeline
PoV distribution should be tasks spawned by backing directly.
Statement distribution should be a service spawned by backing.
Collation generation and the collator protocol should be merged, and probably owned by backing as well.
Availability distribution: the sending side definitely does not need to be its own subsystem. The right architecture is yet to be defined. We should also keep future optimizations in mind, like asynchronous availability.
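To illustrate the "service spawned by backing" idea, here is a minimal Rust sketch of a backing subsystem that owns statement distribution as a private task instead of talking to a top-level subsystem. All names (`BackingSubsystem`, `StatementDistMessage`, etc.) are hypothetical, and an OS thread stands in for a future spawned on the subsystem's executor:

```rust
use std::sync::mpsc;
use std::thread;

// Illustrative message type; real statement distribution messages are richer.
enum StatementDistMessage {
    Share(String),
    Stop,
}

// Instead of registering statement distribution with the overseer, backing
// spawns it as a task and keeps the only handle, hiding the type entirely
// from the top level.
struct BackingSubsystem {
    statement_dist: mpsc::Sender<StatementDistMessage>,
    handle: thread::JoinHandle<Vec<String>>,
}

impl BackingSubsystem {
    fn start() -> Self {
        let (tx, rx) = mpsc::channel();
        // Thread used for illustration only; production code would spawn a
        // future on the subsystem's executor instead.
        let handle = thread::spawn(move || {
            let mut shared = Vec::new();
            while let Ok(msg) = rx.recv() {
                match msg {
                    StatementDistMessage::Share(s) => shared.push(s),
                    StatementDistMessage::Stop => break,
                }
            }
            shared
        });
        Self { statement_dist: tx, handle }
    }

    // Backing forwards statements to its private service over the channel.
    fn on_statement(&self, stmt: &str) {
        let _ = self.statement_dist.send(StatementDistMessage::Share(stmt.into()));
    }

    fn stop(self) -> Vec<String> {
        let _ = self.statement_dist.send(StatementDistMessage::Stop);
        self.handle.join().expect("service task panicked")
    }
}

fn main() {
    let backing = BackingSubsystem::start();
    backing.on_statement("seconded(candidate-a)");
    backing.on_statement("valid(candidate-a)");
    let shared = backing.stop();
    assert_eq!(shared.len(), 2);
    println!("ok");
}
```

Since only backing holds the sender, the service's message type no longer needs to be public, which is exactly the encapsulation argument above.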
Moving the runtime API into a service would indeed simplify the code, but the caching would then be local to the subsystem that instantiates it. We'd end up with the same data being cached multiple times, and behind the scenes we'd also invoke the runtime more often.
Simplify Approvals
#1617
Runtime API as a service
A simple function-based API, doing all the caching under the hood. This unifies and simplifies runtime API access across the whole stack.
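A minimal Rust sketch of what such a service could look like: a "global" session-keyed cache shared by all subsystems, with a small local cache in front of it, so the same data is not cached multiple times (addressing the concern raised above). All names (`RuntimeInfoService`, `SharedSessionCache`, `fetch_from_runtime`) are hypothetical stand-ins, not the actual API:

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

// Hypothetical stand-ins for the real polkadot types.
type SessionIndex = u32;

#[derive(Clone, Debug, PartialEq)]
struct SessionInfo {
    validators: Vec<String>,
}

// "Global" cache shared by all subsystems, keyed by session.
#[derive(Default)]
struct SharedSessionCache {
    sessions: Mutex<HashMap<SessionIndex, SessionInfo>>,
}

// Per-subsystem handle: a small lock-free local cache in front of the shared one.
struct RuntimeInfoService {
    shared: Arc<SharedSessionCache>,
    local: HashMap<SessionIndex, SessionInfo>,
}

impl RuntimeInfoService {
    fn new(shared: Arc<SharedSessionCache>) -> Self {
        Self { shared, local: HashMap::new() }
    }

    // Function-based API: callers just ask for session info; caching happens
    // under the hood. `fetch_from_runtime` stands in for the actual runtime
    // API call and is only invoked on a miss in both cache layers.
    fn session_info(
        &mut self,
        session: SessionIndex,
        fetch_from_runtime: impl FnOnce() -> SessionInfo,
    ) -> SessionInfo {
        if let Some(info) = self.local.get(&session) {
            return info.clone();
        }
        let mut shared = self.shared.sessions.lock().unwrap();
        let info = shared
            .entry(session)
            .or_insert_with(fetch_from_runtime)
            .clone();
        drop(shared);
        self.local.insert(session, info.clone());
        info
    }
}

fn main() {
    let shared = Arc::new(SharedSessionCache::default());
    let mut svc_a = RuntimeInfoService::new(shared.clone());
    let mut svc_b = RuntimeInfoService::new(shared.clone());

    // First call goes "to the runtime"; the result lands in both layers.
    let info = svc_a.session_info(7, || SessionInfo { validators: vec!["alice".into()] });
    assert_eq!(info.validators, vec!["alice".to_string()]);

    // A second service instance hits the shared cache: the fetch closure is
    // never invoked (it would panic if it were).
    let cached = svc_b.session_info(7, || unreachable!("should hit shared cache"));
    assert_eq!(cached.validators, vec!["alice".to_string()]);
    println!("ok");
}
```

A real implementation would bound both layers (LRU by SessionIndex) and deduplicate concurrent fetches; the sketch only shows the two-tier lookup shape.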
Streamline Availability Recovery and Candidate Validation
We could already spawn the validation while we are still doing the re-encoding, then wait for both to succeed before considering the candidate valid.
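The overlap could be sketched as follows, with a thread standing in for a spawned future and both check functions (`reencode_and_check_erasure_root`, `validate_candidate`) being hypothetical placeholders for the real checks:

```rust
use std::thread;

// Placeholder for re-encoding the data and checking the erasure root.
fn reencode_and_check_erasure_root(data: &[u8]) -> Result<(), String> {
    if data.is_empty() { Err("empty PoV".into()) } else { Ok(()) }
}

// Placeholder for executing the PVF / candidate validation.
fn validate_candidate(data: &[u8]) -> Result<(), String> {
    if data.len() > 1024 * 1024 { Err("PoV too large".into()) } else { Ok(()) }
}

// Kick off the re-encoding, run validation concurrently, then require both
// to succeed before treating the candidate as valid. A thread is used for
// illustration; the real code would join two futures instead.
fn check_candidate(data: Vec<u8>) -> Result<(), String> {
    let data2 = data.clone();
    let encoding = thread::spawn(move || reencode_and_check_erasure_root(&data2));
    let validation = validate_candidate(&data);
    let encoding = encoding.join().expect("re-encoding task panicked");
    // Candidate is valid only if both checks passed.
    encoding.and(validation)
}

fn main() {
    assert!(check_candidate(vec![1, 2, 3]).is_ok());
    assert!(check_candidate(vec![]).is_err());
    println!("ok");
}
```

The point of the overlap is latency: the slower of the two checks dominates instead of their sum.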