Currently the chunk producer has to send the state witness V times, where V is the number of chunk validators. Delivery of the state witness is latency-sensitive because it is necessary for block production.
With a sizable state witness (say, 8 MB) and 50 chunk validators, the chunk producer has to urgently send a burst of 400 MB, which overwhelms the node's local network capacity.
By implementing a distribution mechanism based on splitting the witness into parts and relying on other validators to forward those parts, we can spread the work among all of the chunk validators. This is similar to the approach used on mainnet today for chunk distribution.
We plan to:

- Use Reed-Solomon encoding to break the state witness into V parts, where V is the number of chunk validators.
- For each part, assign one chunk validator to be responsible for forwarding that part to all other chunk validators.
- Decode the state witness once enough parts have arrived, then proceed as normal (a sketch of the encode/decode flow follows this list).
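A minimal sketch of the encode/decode step, assuming the `reed-solomon-erasure` crate. The part-count split, padding scheme, and function names here are illustrative assumptions for this issue, not the final implementation:

```rust
// Illustrative sketch only: part counts, padding, and function names are
// assumptions for this example, not the final nearcore implementation.
// Assumes num_validators >= 2 so that there is at least one parity part.
use reed_solomon_erasure::galois_8::ReedSolomon;

/// Split the serialized state witness into `num_validators` equally sized
/// parts, of which roughly 2/3 carry data and 1/3 are parity.
fn encode_witness(witness: &[u8], num_validators: usize) -> Vec<Vec<u8>> {
    // ~2/3 data parts and ~1/3 parity parts gives the ~50% overhead and the
    // "any 2/3 of the parts can decode" property described in this issue.
    let data_parts = std::cmp::max(1, num_validators * 2 / 3);
    let parity_parts = num_validators - data_parts;
    let rs = ReedSolomon::new(data_parts, parity_parts).unwrap();

    // Pad the witness so it splits evenly into `data_parts` pieces.
    let part_len = (witness.len() + data_parts - 1) / data_parts;
    let mut parts: Vec<Vec<u8>> = (0..num_validators)
        .map(|i| {
            let mut part = vec![0u8; part_len];
            if i < data_parts {
                let start = i * part_len;
                let end = (start + part_len).min(witness.len());
                if start < witness.len() {
                    part[..end - start].copy_from_slice(&witness[start..end]);
                }
            }
            part
        })
        .collect();
    rs.encode(&mut parts).unwrap(); // fills in the parity parts
    parts
}

/// Reconstruct the witness once at least `data_parts` of the parts have
/// arrived; parts that have not arrived yet are `None`.
fn decode_witness(
    mut parts: Vec<Option<Vec<u8>>>,
    data_parts: usize,
    witness_len: usize,
) -> Option<Vec<u8>> {
    let parity_parts = parts.len() - data_parts;
    let rs = ReedSolomon::new(data_parts, parity_parts).ok()?;
    rs.reconstruct(&mut parts).ok()?;

    // Concatenate the data parts and strip the padding.
    let mut witness: Vec<u8> = parts
        .into_iter()
        .take(data_parts)
        .flat_map(|p| p.unwrap())
        .collect();
    witness.truncate(witness_len);
    Some(witness)
}
```

Each part would then be forwarded to all chunk validators by its assigned owner, and any 2/3 of the parts are enough to call `decode_witness` on the receiving side.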
We expect the latency of this 2-hop approach to be acceptable because the same pattern works well in production today for chunk distribution via `PartialEncodedChunks`.
After this change, each node will only need to send an amount of data equal to the size of the state witness, plus a 50% overhead from Reed-Solomon encoding. As an added benefit of the erasure coding, each validator only needs to receive 2/3 of the parts before the state witness can be decoded.
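With the numbers from the example above (and assuming the 2/3 data, 1/3 parity split implied by the 50% overhead), this reduces each node's burst from 8 MB × 50 = 400 MB to roughly 8 MB × 1.5 = 12 MB: the chunk producer sends 50 parts of about 0.24 MB each (~12 MB total), and each forwarding validator re-sends its single ~0.24 MB part to the other 49 validators, which again comes to roughly 12 MB.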