A light client wishing to verify a Tendermint header must check that the purported header has valid signatures from a 2/3+1 stake weight coalition.
The current implementation (as I understand it) processes signatures in stake-weight order, individually verifying each one until a 2/3+1 threshold is reached.
However, if the verifier had advice about which signatures formed a valid 2/3+1 majority, it could perform verification much more efficiently by doing a batch verification check. It turns out that, given a set of signatures, checking "are all signatures in the set valid?" can be computed much more efficiently than verifying each signature individually, which yields strictly more information ("exactly which signatures in the set are valid?").
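For concreteness, here is a rough sketch (mine, not from the issue) of the two checks side by side, using the ed25519-consensus batch API; the `Vote` tuple layout and the threshold handling are simplified placeholders:

```rust
use ed25519_consensus::{batch, Signature, VerificationKey, VerificationKeyBytes};
use rand::thread_rng;

// One vote: (verification key, signature, signed message, stake).
type Vote = (VerificationKey, Signature, Vec<u8>, u64);

// Individual verification: check signatures one by one in stake-weight order,
// stopping once the verified stake crosses the threshold. Each call pays the
// full cost of a standalone verification, but we learn exactly which
// signatures are valid.
fn verify_individually(votes: &[Vote], threshold: u64) -> bool {
    let mut verified_stake: u64 = 0;
    for (vk, sig, msg, stake) in votes {
        if vk.verify(sig, msg).is_ok() {
            verified_stake += stake;
            if verified_stake > threshold {
                return true;
            }
        }
    }
    false
}

// Batch verification: answers only "are all queued signatures valid?", but
// amortizes the expensive curve arithmetic across the whole batch.
fn verify_as_batch(votes: &[Vote], threshold: u64) -> bool {
    let mut verifier = batch::Verifier::new();
    let mut stake: u64 = 0;
    for (vk, sig, msg, s) in votes {
        verifier.queue((VerificationKeyBytes::from(*vk), *sig, msg));
        stake += s;
    }
    stake > threshold && verifier.verify(thread_rng()).is_ok()
}
```

The batch verifier internally combines all the verification equations using random coefficients and checks them with one large multiscalar multiplication, which is where both the speedup and the randomness requirement (discussed in the comments below) come from.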
This performance increase can be substantial, as illustrated in this graph from work I did a few years ago (the ed25519-zebra library is now called ed25519-consensus):
While I haven't re-checked exact numbers, the chart suggests that batch verification of a majority subset could be >50% cheaper, cycle-for-cycle, than individual verification of each signature.
For a verifier to take advantage of this speedup, however, it needs to be advised on which signatures form a valid 2/3+1 majority. Computing this advice requires individually checking each signature, which would appear to nullify any performance advantage.
However, when running a Tendermint light client in ZK (e.g., in SP1), the advice can be computed cheaply out-of-circuit and supplied as input to the code whose execution is certified by the prover. Since the correctness of verification does not rely on the correctness of the advice (incorrect advice can at worst result in failing to prove that the header was valid), this substantially reduces the in-circuit cost of header verification.
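A hypothetical shape for this host/guest split is sketched below. The `sp1_zkvm::io::read` / `commit` calls are SP1's guest I/O primitives, but the vote encoding, the advice format, and the fixed RNG seed are illustrative stand-ins; in particular, the seed choice is exactly the randomness question raised in the comments below.

```rust
// Guest program (runs inside the zkVM; its execution is what gets proven).
// The host checks each signature individually out of circuit, picks a subset
// whose stake exceeds 2/3+1, and passes the subset's indices in as advice.
#![no_main]
sp1_zkvm::entrypoint!(main);

use ed25519_consensus::{batch, Signature, VerificationKeyBytes};
use rand_chacha::{rand_core::SeedableRng, ChaCha20Rng};

pub fn main() {
    // Commit votes in a simplified wire encoding:
    // (verification key bytes, signature bytes, signed message, stake).
    let votes: Vec<(Vec<u8>, Vec<u8>, Vec<u8>, u64)> = sp1_zkvm::io::read();
    // Untrusted advice computed out of circuit: indices of the advised subset.
    let advised: Vec<usize> = sp1_zkvm::io::read();

    // Indices must be strictly increasing so no validator's stake is counted twice.
    assert!(advised.windows(2).all(|w| w[0] < w[1]));

    let total_stake: u64 = votes.iter().map(|v| v.3).sum();
    let mut verifier = batch::Verifier::new();
    let mut advised_stake: u64 = 0;

    for &i in &advised {
        let (vk, sig, msg, stake) = &votes[i];
        let vk_bytes: [u8; 32] = vk.as_slice().try_into().expect("bad key length");
        let sig_bytes: [u8; 64] = sig.as_slice().try_into().expect("bad sig length");
        verifier.queue((VerificationKeyBytes::from(vk_bytes), Signature::from(sig_bytes), msg));
        advised_stake += stake;
    }

    // Both checks are enforced in-circuit, so incorrect advice can only make
    // proving fail; it can never make an invalid header appear valid.
    assert!(3 * (advised_stake as u128) > 2 * (total_stake as u128));

    // Placeholder randomness: a sound design needs a source the prover cannot
    // bias (see the discussion of Fiat-Shamir-style seeding in the comments).
    let rng = ChaCha20Rng::seed_from_u64(0);
    assert!(verifier.verify(rng).is_ok());

    sp1_zkvm::io::commit(&true);
}
```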
Batch verification seems to require a randomness source. How might we be able to prove this in a zkVM? Maybe it would need some randomness source that's outside the prover's control, such as the result from the Fiat-Shamir step... 🤔
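One possible shape for that idea, sketched below and not a vetted design: derive the batch RNG seed by hashing the whole batch (keys, signatures, messages), so the randomness is a deterministic function of the data being verified rather than a free choice of the prover. The function names and the domain-separation string are made up, and whether this seeding is actually sound for batch verification in this setting is exactly the open question.

```rust
use ed25519_consensus::{batch, Signature, VerificationKeyBytes};
use rand_chacha::{rand_core::SeedableRng, ChaCha20Rng};
use sha2::{Digest, Sha256};

/// Derive the batch-verification RNG seed from the batch contents themselves,
/// Fiat-Shamir style, so the prover cannot choose the randomness after seeing
/// the signatures. (Sketch only, not a security argument.)
fn transcript_rng(items: &[(VerificationKeyBytes, Signature, Vec<u8>)]) -> ChaCha20Rng {
    let mut hasher = Sha256::new();
    hasher.update(b"tendermint-light-client-batch-verify-v1"); // domain separator
    for (vk, sig, msg) in items {
        hasher.update(vk.as_ref());
        hasher.update(sig.to_bytes());
        hasher.update((msg.len() as u64).to_le_bytes()); // length-prefix the message
        hasher.update(msg);
    }
    ChaCha20Rng::from_seed(hasher.finalize().into())
}

fn verify_batch_deterministically(items: &[(VerificationKeyBytes, Signature, Vec<u8>)]) -> bool {
    let mut verifier = batch::Verifier::new();
    for (vk, sig, msg) in items {
        verifier.queue((*vk, *sig, msg));
    }
    verifier.verify(transcript_rng(items)).is_ok()
}
```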
Is there a way to do this without needing to deeply modify the zkVM proving system?