av-store: granular pruning of data #7237
Comments
Do you know what takes the time? The actual IO when applying the TX? Or the processing of what to delete? If it is the latter, we should just move this to some background process.
I am not sure which part exactly is heavy. Before moving this to a background task we'd have to be certain that we don't corrupt the db with multiple writers/readers on the same column.
If the db is Sync & Send, I would assume that you can read and write from multiple threads :P
Yes it is, but usually we do this from a single subsystem thread. Maybe in this case it shouldn't be an issue, as the keys should no longer be accessed by anything, since this will happen after 25 hours.
There are situations where pruning of the data could take more than a few seconds and that might make the whole subsystem unresponsive. To avoid this, just move the prune process to a separate thread. See: #7237 for more details. Signed-off-by: Alexandru Gheorghe <[email protected]>
Moving prune_all to a separate blocking task: #7263
* av-store: Move prune on a separate thread. There are situations where pruning of the data could take more than a few seconds and that might make the whole subsystem unresponsive. To avoid this, just move the prune process to a separate thread. See: #7237 for more details. Signed-off-by: Alexandru Gheorghe <[email protected]>
* av-store: Add log that pruning started. Signed-off-by: Alexandru Gheorghe <[email protected]>
* av-store: Modify log severity. Signed-off-by: Alexandru Gheorghe <[email protected]>
Fixed with: #7263
Currently we block the subsystem for however long it takes to prune the data, both on timer and on finality. We should make this process more granular so as to allow the processing of messages in between; otherwise we can end up in a situation where pruning takes > 10s and the node crashes due to a SubsystemStalled error.
polkadot/node/core/av-store/src/lib.rs, line 617 in 19fdd19
In the past I have seen very high times, up to 10s, on a few nodes, but not recently in tests with small PoVs.