Proposal: BLS serialization tests #24
Comments
Replying to @hwwhww on 1.2: BLST has removed subgroup checks when decompressing points as of v3; this was just merged into Prysm here.
The group check is an expensive operation, and it's argued that the application is entitled to choose when to perform it, primarily because there are situations where multiple checks can be performed in parallel. In other words, blst's deserialization/uncompression subroutines don't perform group checks; it's the application's responsibility either to make an explicit in-group check call right after deserialization, or to pass perform-group-checks-as-you-go flags to higher-level procedures and exploit parallelism whenever possible.

Speaking of the serialization format in more general terms: as data is deserialized and converted into the internal representation format, it is implicitly reduced modulo the field prime. In other words, the deserialization procedure can handle non-reduced input seamlessly. But it's only natural to assume that if the input is not reduced, somebody is trying to mess with you. For this reason, the first thing the [blst] deserialization procedure does is ensure that the input is fully reduced. However, this behaviour is not actually specified, and a formal concern can be (and even was) raised. So, while we are at it, it would be appropriate to explicitly specify that input is expected to be fully reduced, and to provide a corresponding test vector.
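To make the "fully reduced input" requirement concrete, here is a minimal Python sketch of the kind of check (and negative test vector) being proposed. This is not blst's actual code; the helper name and flag handling are illustrative, following the Zcash-style compressed encoding where the top three bits of a 48-byte G1 encoding are flags.

```python
# Sketch: reject non-reduced input when deserializing a compressed G1 point.
# BLS12-381 base field modulus (a fixed constant of the curve).
P = 0x1a0111ea397fe69a4b1ba7b6434bacd764774b84f38512bf6730d2a0f6b0f6241eabfffeb153ffffb9feffffffffaaab

FLAG_MASK = 0b11100000  # top three bits: compression, infinity, sign

def x_coordinate_is_reduced(data: bytes) -> bool:
    """Return True iff the encoded x-coordinate is fully reduced mod P."""
    if len(data) != 48:
        raise ValueError("compressed G1 point must be 48 bytes")
    # Clear the three flag bits before interpreting the big-endian integer.
    raw = bytes([data[0] & ~FLAG_MASK & 0xFF]) + data[1:]
    x = int.from_bytes(raw, "big")
    return x < P

# A test vector whose x-coordinate equals P (i.e. not reduced) must be rejected:
bad = bytearray(P.to_bytes(48, "big"))
bad[0] |= 0b10000000  # set the compression flag
assert not x_coordinate_is_reduced(bytes(bad))
```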
On 1.1: the C API that we use supports decompression/uncompression of the Zcash serialization format.
These APIs are all available in Apache Milagro Rust and could trivially be added.

With respect to subgroup checking: in Lighthouse we do that for PublicKeys at serialization time and for Signatures at verification time. So it may not be on by default for Lighthouse, but it can easily be added for tests. Happy either way, whether or not we include the subgroup checks.
From a fuzzing perspective, the existing BLS work (available here) included the 3 MSBs, but it was only aimed at the compressed form (i.e. MSB0 = 1).
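To illustrate full coverage of the 3-MSB edge cases, a fuzzer seed set could enumerate every combination of the compression, infinity, and sign flags on top of an arbitrary body. This is a hypothetical sketch, not the fuzzer's actual code:

```python
# Sketch: enumerate the 2^3 combinations of the 3 MSB flags as fuzzer seeds.
def with_flags(body: bytes, c: int, b: int, s: int) -> bytes:
    """Overwrite the compression (c), infinity (b), and sign (s) flag bits."""
    first = (body[0] & 0b00011111) | (c << 7) | (b << 6) | (s << 5)
    return bytes([first]) + body[1:]

# Eight seeds covering all flag combinations over a zeroed 48-byte body.
seeds = [with_flags(bytes(48), c, b, s)
         for c in (0, 1) for b in (0, 1) for s in (0, 1)]
assert len(seeds) == 8
```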
Here is a case from November 2019 where the infinity bit was set and we didn't properly check that the rest of the bits were 0: status-im/nimbus-eth2#555. This was before we skipped the Ethereum signature for testing, so it's from one of the old test vectors.
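That class of bug can be caught by a check like the following sketch: when the infinity flag is set on a compressed encoding, every bit other than the compression flag must be zero. The function name and layout assumptions are illustrative, following the Zcash-style flag encoding.

```python
# Sketch: validate a compressed point-at-infinity encoding (48 or 96 bytes).
def is_valid_infinity_encoding(data: bytes) -> bool:
    if len(data) not in (48, 96):
        return False
    c_flag = (data[0] >> 7) & 1  # compression flag
    b_flag = (data[0] >> 6) & 1  # infinity flag
    if not (c_flag == 1 and b_flag == 1):
        return False
    # The sign flag and all remaining bits must be zero.
    return (data[0] & 0b00111111) == 0 and all(b == 0 for b in data[1:])

good = bytes([0b11000000]) + bytes(47)
bad = bytes([0b11000000]) + bytes(46) + bytes([1])  # stray low bit set
assert is_valid_infinity_encoding(good)
assert not is_valid_infinity_encoding(bad)
```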
Close? https://github.com/ethereum/bls12-381-tests includes decompression tests (cc @asanso). Compression can be done by clients via round-trip tests to confirm internal consistency. |
BLS serialization tests
Background
Proposed new test suite
The inputs and outputs of our BLS APIs are all in minimal-pubkey-size form (48-byte compressed pubkeys and 96-byte compressed signatures). So the functions to test would be:
Discussions
1. Are the APIs available?
2. Did the fuzzing already cover the 3-MSBs edge cases of BLS tests?
/cc @zedt3ster
3. Do you think it would help to reduce the consensus error risks?
/cc @JustinDrake @CarlBeek