Our current implementation introduces 2x latency for chunk execution: validation of chunk (N+1) is blocked on execution of chunk N, and that validation repeats the execution of chunk N itself.
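To make the 2x concrete, here is a tiny hypothetical sketch (the one-time-unit `execute_chunk` is illustrative, not nearcore code): chunk N's execution sits on the critical path twice, once at its producer and once more when it is repeated while validating chunk (N+1).

```rust
// Illustrative only: each execution of chunk N is modeled as one time unit.
fn execute_chunk(label: &str) -> u32 {
    println!("executing {label}");
    1
}

fn main() {
    // First execution: the chunk producer executes chunk N.
    let first_pass = execute_chunk("chunk N (chunk producer)");
    // Second execution: validating chunk N+1 repeats chunk N's execution.
    let second_pass = execute_chunk("chunk N (repeated during validation of chunk N+1)");
    println!(
        "chunk N is executed {} times on the critical path",
        first_pass + second_pass
    );
}
```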
This latency can be avoided by optimistic execution:
Once the CP produces chunk N, it can go ahead and distribute it to the CP for chunk (N+1). Because that CP has the state, it can execute chunk N before block inclusion and record the resulting state proof.
This requires a couple of annoying runtime changes. For example, the chunk execution protocol currently uses the block hash during execution; with optimistic execution only the prev block hash can be used, because the block that will include the chunk does not exist yet.
Once the BP includes chunk N into a block, the CP for chunk (N+1) will already have the state proof for chunk N ready, so it can immediately produce chunk (N+1) and send the state witness to the CVs.
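A minimal sketch of that flow, assuming a hypothetical cache keyed by the prev block hash (names such as `OptimisticExecutionCache`, `OptimisticResult`, `execute_optimistically`, and `take_result` are illustrative, not actual nearcore APIs): the CP for chunk (N+1) executes chunk N as soon as it is distributed and stores the result, so when the BP includes chunk N into a block the state proof is already available and chunk (N+1) can be produced without re-executing chunk N.

```rust
use std::collections::HashMap;

type BlockHash = [u8; 32];
type ChunkHash = [u8; 32];

/// Hypothetical result of optimistically applying a chunk: the post-state root
/// plus the proof that goes into the state witness for the next chunk.
#[derive(Clone)]
struct OptimisticResult {
    post_state_root: [u8; 32],
    state_proof: Vec<u8>,
}

#[derive(Default)]
struct OptimisticExecutionCache {
    // Keyed by (prev block hash, chunk hash): with optimistic execution the
    // runtime can only rely on the prev block hash, since the block that will
    // include this chunk does not exist yet.
    results: HashMap<(BlockHash, ChunkHash), OptimisticResult>,
}

impl OptimisticExecutionCache {
    /// Execute chunk N as soon as it is distributed, before block inclusion.
    /// The runtime call is a placeholder for applying the chunk against state.
    fn execute_optimistically(&mut self, prev_block_hash: BlockHash, chunk_hash: ChunkHash) {
        let result = OptimisticResult {
            post_state_root: [0u8; 32],
            state_proof: vec![],
        };
        self.results.insert((prev_block_hash, chunk_hash), result);
    }

    /// Once the BP includes chunk N, the CP for chunk N+1 takes the cached
    /// result instead of executing chunk N again.
    fn take_result(
        &mut self,
        prev_block_hash: BlockHash,
        chunk_hash: ChunkHash,
    ) -> Option<OptimisticResult> {
        self.results.remove(&(prev_block_hash, chunk_hash))
    }
}

fn main() {
    let mut cache = OptimisticExecutionCache::default();
    let prev_block = [1u8; 32];
    let chunk_n = [2u8; 32];

    // Step 1: chunk N is distributed; execute it optimistically right away.
    cache.execute_optimistically(prev_block, chunk_n);

    // Step 2: chunk N is included into a block; the state proof is already
    // ready, so chunk N+1 can be produced immediately.
    if let Some(result) = cache.take_result(prev_block, chunk_n) {
        println!(
            "state proof for chunk N ready at inclusion time ({} bytes)",
            result.state_proof.len()
        );
    }
}
```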
This is a well-defined win after the stateless validation release.
For now, I believe we don't need to change config delays because stateless validation improves performance on its own.
Original context: https://docs.google.com/document/d/1k0NRMcLsDZp6C9pCRjNu5l7irDyRHsZ3VtKAKno_tFY/edit#heading=h.7ae0b4dh7648
Another picture of the current workflow that I came up with while trying to understand this: