Halve commit size and improve verification speed with partial Ed25519 signature aggregation #1305
Thank you for porting this issue to our new repository. I don't have the cryptographic background to fully understand and validate this approach for signature aggregation. But, considering the approach itself, the main points I think we should consider are:
The questions above were crucial, for instance, in deciding that at the current state of the art we should not adopt BLS aggregated signatures: not only because they require new keys and are not backwards-compatible, but also because producing ordinary (single, non-aggregated) signatures with this kind of key is at least 10 times slower than with Ed25519.
Yes, this involves no changes to validators, their signing keys, or how signatures are produced (i.e. HSM-friendly)
It would be most easily implemented as an optimization technique applied to new blocks, leaving old blocks as-is. There is perhaps some possibility of applying it to old blocks to reduce the size of archival nodes, but I would expect that to be much harder to implement and I'm not sure it's even possible offhand (e.g. would it change block IDs?)
That text was carried over from tendermint/tendermint#7892. To be clear it reduces the size of the consensus signatures in half. Offhand I don't have statistics as to how much of the overall block size signatures make up on average.
For the linear combination part of it, on my M1 Max a scalar addition takes 18ns. Multiply that by, say, 125 signatures and that's 2.25 microseconds, and there are more efficient algorithms for computing linear combinations. That's not the complete story, but suffice it to say the aggregation is significantly faster than Ed25519 signing/verifying. It's also a small one-time cost on the part of the proposer which makes signature verification faster overall, where the verification win impacts every block verified by every full node (though the performance win there would come largely from batch verification, which could be implemented without this technique).
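The back-of-envelope estimate above can be reproduced directly (the 18ns figure and 125-signature validator set size are the comment's assumptions, not measured values):

```python
# Rough proposer-side cost estimate: one scalar addition measured at
# ~18 ns on an M1 Max, times ~125 validator signatures per block.
ns_per_scalar_add = 18
num_signatures = 125
total_microseconds = ns_per_scalar_add * num_signatures / 1000
print(total_microseconds)  # 2.25
```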
Protocol Change Proposal
Summary
CometBFT uses Ed25519 signatures created by validators for consensus.
If a normal Ed25519 signature is (R, s), where each component is a 32-byte serialized field element (base field and scalar field respectively), it's possible to aggregate the s components of the signatures as a linear combination, which can be verified by an efficient batch verification algorithm. This effectively cuts the size of consensus signatures stored in blocks in half, in a way that can be deterministically computed from the original set of Ed25519 signatures, and requires no changes to signers (i.e. HSM-friendly).
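As a rough sketch of the aggregation step only (this is not CometBFT code; the coefficient-derivation helper `coeff` and all names below are hypothetical, and a real implementation must derive the coefficients exactly as the cited paper specifies so the proposer cannot choose them adaptively), the proposer keeps all the R halves and collapses the s halves into a single scalar modulo the Ed25519 group order:

```python
import hashlib

# Order of the prime-order subgroup used by Ed25519 (RFC 8032).
L = 2**252 + 27742317777372353535851937790883648493

def coeff(i, rs, pubkeys, msg):
    # Hypothetical deterministic coefficient z_i: hash the index together
    # with every R component, every public key, and the signed message,
    # so the coefficients are fixed by the signature set itself.
    h = hashlib.sha512()
    h.update(i.to_bytes(4, "little"))
    for r in rs:
        h.update(r)
    for pk in pubkeys:
        h.update(pk)
    h.update(msg)
    return int.from_bytes(h.digest(), "little") % L

def aggregate_s(sigs, pubkeys, msg):
    """Given Ed25519 signatures (R_i, s_i) as 32-byte strings, keep the
    R_i and replace all s_i with s_agg = sum(z_i * s_i) mod L."""
    rs = [r for (r, _) in sigs]
    s_agg = 0
    for i, (_, s) in enumerate(sigs):
        z = coeff(i, rs, pubkeys, msg)
        s_agg = (s_agg + z * int.from_bytes(s, "little")) % L
    return rs, s_agg
```

Storage drops from 64 bytes per signature to 32 bytes per signature plus one shared 32-byte scalar; a verifier would recompute the same z_i and check the corresponding batch-verification equation over the R_i and public keys.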
Problem Definition / Proposal
This was originally proposed by @ValarDragon as tendermint/tendermint#7892 which contains a detailed description, but was auto-closed as stale.
I mainly opened this issue because it seems like a highly impactful "drop-in" optimization for how CometBFT stores consensus signatures which can significantly reduce both block size and verification time, and I didn't want to see it lost in the shuffle.
If it's helpful I can copy the relevant text into this issue.
Edit: I should note there's a paper describing the scheme here, complete with security proof: https://eprint.iacr.org/2021/350.pdf