Question: does FastAggregateVerify accept the point-at-infinity PK (i.e., SK=0)? #27

Comments
Hi @hwwhww: Thanks for the question---it certainly indicates that the document could be clearer, which is very helpful feedback 👍

That said, there is no inconsistency here. The reason is, per Section 3.3, a proof of possession must be verified for every public key before FastAggregateVerify is used, and PopVerify (Section 3.3.3) calls KeyValidate. So any conforming implementation will have called KeyValidate on all PKs supplied to FastAggregateVerify, and therefore identity PKs are not allowed.

Does this make sense? Would it be clearer if we reiterated the KeyValidate requirement in the FastAggregateVerify section?
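To make the precondition chain above concrete, here is a toy, self-contained Python sketch (not real BLS and not the draft's pseudocode: integers mod a small prime stand in for G1 points, 0 stands in for the point at infinity, and the proof check is a placeholder):

TOY_ORDER = 101   # stand-in for the prime subgroup order
IDENTITY = 0      # stand-in for the point at infinity (the SK = 0 public key)

def key_validate(pk: int) -> bool:
    if not (0 <= pk < TOY_ORDER):   # stand-in for "deserialization failed"
        return False
    if pk == IDENTITY:              # the identity element is rejected here
        return False
    return True

def pop_verify(pk: int, proof: int) -> bool:
    if not key_validate(pk):        # PopVerify calls KeyValidate first ...
        return False
    return proof == pk              # ... placeholder for the real pairing check

# Every PK supplied to FastAggregateVerify must already have passed pop_verify,
# so the identity PK can never legally reach the aggregation step:
assert pop_verify(IDENTITY, IDENTITY) is False
assert pop_verify(42, 42) is True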
@kwantam Thank you for your reply, it is much clearer now! 👍 I understand that PopVerify is a precondition of FastAggregateVerify, and that PopVerify calls KeyValidate.

In applications, it's probably fine since PKs are filtered before calling FastAggregateVerify. Let me know if I missed anything or anything is unclear. :)
Sorry, I think I don't quite understand the concern or the suggested fix. A little more background: PopVerify is a strict precondition for FastAggregateVerify because the security of the signature scheme against rogue key attacks requires verifying a proof of possession for each public key. FastAggregateVerify must not be called on public keys for which a valid proof of possession is not known, because that would allow malicious users to trivially falsify aggregate signatures.

This means that adding KeyValidate for each PK as a precondition to FastAggregateVerify is not necessary, because it is already a precondition by way of PopVerify; meanwhile, executing KeyValidate inside FastAggregateVerify cannot remove that precondition.

In sum: to implement PureFastAggregateVerify, you would need to pass in both the public keys and the corresponding proofs of possession. If you wanted, you could implement this as a wrapper around FastAggregateVerify for fuzzing purposes.

Sorry if I have misunderstood what you are saying!
@kwantam Sorry I didn't describe it well. 😅 This is how we implemented FastAggregateVerify:

def FastAggregateVerify((PK_1, ..., PK_n), message, signature):
# precondition
If n < 1: return INVALID
# Run procedure
1. aggregate = pubkey_to_point(PK_1)
2. for i in 2, ..., n:
3. next = pubkey_to_point(PK_i)
4. aggregate = aggregate + next
5. PK = point_to_pubkey(aggregate)
6. return CoreVerify(PK, message, signature)

This is the pseudocode of what you meant by the PureFastAggregateVerify wrapper:

def PureFastAggregateVerify((PK_1, ..., PK_n), message, signature, (proof_1, ..., proof_n)):
# precondition 1
If n < 1: return INVALID
# precondition 2
for i in 1, ..., n:
If PopVerify(PK_i, proof_i) is INVALID, return INVALID
return FastAggregateVerify((PK_1, ..., PK_n), message, signature)
def FastAggregateVerify((PK_1, ..., PK_n), message, signature):
# Run procedure
1. aggregate = pubkey_to_point(PK_1)
2. for i in 2, ..., n:
3. next = pubkey_to_point(PK_i)
4. aggregate = aggregate + next
5. PK = point_to_pubkey(aggregate)
6. return CoreVerify(PK, message, signature)

If the above logic is correct, then when we only look at FastAggregateVerify itself, a point-at-infinity PK is still accepted by the aggregation. Thanks for your time again. 🙏

edited: I understand that PopVerify is the stated precondition; my question is only about FastAggregateVerify taken on its own.
From my perspective---and from the document's perspective---this is a false statement. The document says (or, intends to say!) that FastAggregateVerify is only defined for public keys whose proofs of possession have been verified; calling it on other inputs violates its precondition.

This is morally equivalent, for example, to a function that requires the caller to ensure that a pointer supplied as an argument is valid. It doesn't make any sense to try and test such a function on an invalid pointer: of course we would not expect it to work (and, needless to say, it is not possible in the general case---in C, say---for a function to determine for itself whether or not a pointer is valid).

We might ask, how can we rewrite the API to avoid this issue? The only answer I can see is to specify something like requiring FastAggregateVerify to check, for each supplied PK, that PopVerify has previously succeeded. But: I don't think this would be a good API, because it seems to prescribe (or at least to prefer) a particular implementation (namely, memoizing PopVerify).

Meanwhile, depending on the programming language, there are plenty of other valid ways to enforce the precondition without resorting to the PureFastAggregateVerify API. For example, one way to ensure that the precondition is never violated is for implementations to (1) use different types for public keys with and without proofs of possession, and (2) ensure that a public key is only instantiated as the proof-carrying type after PopVerify has succeeded, so that FastAggregateVerify can require that type.

Does all of this make sense? I would of course welcome suggestions for other solutions!
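As an aside, here is a minimal Python sketch of the typed approach in (1) and (2) above; every name (PublicKey, ProvenPublicKey, verify_pop, and the stubbed internals) is a hypothetical illustration, not the draft's or any library's API:

from dataclasses import dataclass
from typing import Optional, Sequence

@dataclass(frozen=True)
class PublicKey:
    data: bytes                 # raw, unchecked public key

@dataclass(frozen=True)
class ProvenPublicKey:
    data: bytes                 # key for which PopVerify has already succeeded

def pop_verify(pk: PublicKey, proof: bytes) -> bool:
    raise NotImplementedError   # stand-in for the real KeyValidate + PopVerify

def verify_pop(pk: PublicKey, proof: bytes) -> Optional[ProvenPublicKey]:
    # The only way to obtain a ProvenPublicKey.
    return ProvenPublicKey(pk.data) if pop_verify(pk, proof) else None

def fast_aggregate_verify(pks: Sequence[ProvenPublicKey], message: bytes, signature: bytes) -> bool:
    # The type of pks is the point: a bare PublicKey cannot be passed here, so
    # the PopVerify precondition is enforced statically rather than re-checked
    # (or silently skipped) at every call.
    raise NotImplementedError   # stand-in for the real aggregation + CoreVerify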
If you're using a language that supports session types, like Rust, then you could create some certificate type whose verification function returns the public key, and write serialization code only for the certificate type, so users are forced into checking their proofs-of-possession when deserializing.
You cannot make a compiler enforce this in human-readable pseudocode, even though you can express the constraint. ;) Also, your code becomes slow if users do not understand that they must deserialize once and keep the deserialized and validated keys around in memory, but maybe such users should not really be using BLS anyway, so..
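In the same spirit, a minimal Python sketch of that certificate idea (deserialization exists only for the key-plus-proof certificate, and verifying it is the only way to get a usable key); all names, the byte sizes, and the stubbed check are assumptions for illustration:

from dataclasses import dataclass
from typing import Optional

def pop_verify_bytes(pk: bytes, proof: bytes) -> bool:
    raise NotImplementedError   # stand-in for KeyValidate + PopVerify

@dataclass(frozen=True)
class ValidatedKey:
    point: bytes                # only ever constructed by Certificate.verify()

@dataclass(frozen=True)
class Certificate:
    pk_bytes: bytes
    proof_bytes: bytes

    @classmethod
    def from_bytes(cls, blob: bytes) -> "Certificate":
        # Sole deserialization entry point: a 48-byte compressed PK followed by
        # a 96-byte proof (sizes assumed for the minimal-pubkey-size variant).
        return cls(blob[:48], blob[48:144])

    def verify(self) -> Optional[ValidatedKey]:
        if pop_verify_bytes(self.pk_bytes, self.proof_bytes):
            return ValidatedKey(self.pk_bytes)
        return None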
FYI, these are my implementations of fastAggregateVerify.

Using Miracl as a backend: https://github.com/status-im/nim-blscurve/blob/86d151d7/blscurve/miracl/bls_signature_scheme.nim#L451-L500

func fastAggregateVerify*[T: byte|char](
publicKeys: openarray[PublicKey],
proofs: openarray[ProofOfPossession],
message: openarray[T],
signature: Signature
): bool =
## Verify the aggregate of multiple signatures on the same message
## This function is faster than AggregateVerify
## Compared to the IETF spec API, it is modified to
## enforce proper usage of the proof-of-possession
# 1. aggregate = pubkey_to_point(PK_1)
# 2. for i in 2, ..., n:
# 3. next = pubkey_to_point(PK_i)
# 4. aggregate = aggregate + next
# 5. PK = point_to_pubkey(aggregate)
# 6. return CoreVerify(PK, message, signature)
if publicKeys.len == 0:
return false
if not publicKeys[0].popVerify(proofs[0]):
return false
var aggregate = publicKeys[0]
for i in 1 ..< publicKeys.len:
if not publicKeys[i].popVerify(proofs[i]):
return false
aggregate.point.add(publicKeys[i].point)
return coreVerify(aggregate, message, signature, DST)
func fastAggregateVerify*[T: byte|char](
publicKeys: openarray[PublicKey],
message: openarray[T],
signature: Signature
): bool =
## Verify the aggregate of multiple signatures on the same message
## This function is faster than AggregateVerify
##
## The proof-of-possession MUST be verified before calling this function.
## It is recommended to use the overload that accepts a proof-of-possession
## to enforce correct usage.
# 1. aggregate = pubkey_to_point(PK_1)
# 2. for i in 2, ..., n:
# 3. next = pubkey_to_point(PK_i)
# 4. aggregate = aggregate + next
# 5. PK = point_to_pubkey(aggregate)
# 6. return CoreVerify(PK, message, signature)
if publicKeys.len == 0:
return false
var aggregate = publicKeys[0]
for i in 1 ..< publicKeys.len:
aggregate.point.add(publicKeys[i].point)
return coreVerify(aggregate, message, signature, DST)

And with BLST as a backend: https://github.com/status-im/nim-blscurve/blob/86d151d7/blscurve/blst/bls_sig_min_pubkey_size_pop.nim#L628-L687

func fastAggregateVerify*[T: byte|char](
publicKeys: openarray[PublicKey],
proofs: openarray[ProofOfPossession],
message: openarray[T],
signature: Signature
): bool =
## Verify the aggregate of multiple signatures on the same message
## This function is faster than AggregateVerify
## Compared to the IETF spec API, it is modified to
## enforce proper usage of the proof-of-possession
# 1. aggregate = pubkey_to_point(PK_1)
# 2. for i in 2, ..., n:
# 3. next = pubkey_to_point(PK_i)
# 4. aggregate = aggregate + next
# 5. PK = point_to_pubkey(aggregate)
# 6. return CoreVerify(PK, message, signature)
if publicKeys.len == 0:
return false
if not publicKeys[0].popVerify(proofs[0]):
return false
var aggregate {.noInit.}: blst_p1
aggregate.blst_p1_from_affine(publicKeys[0].point)
for i in 1 ..< publicKeys.len:
if not publicKeys[i].popVerify(proofs[i]):
return false
# We assume that the PublicKey is on the curve, in the proper subgroup
aggregate.blst_p1_add_or_double_affine(aggregate, publicKeys[i].point)
var aggAffine{.noInit.}: PublicKey
aggAffine.point.blst_p1_to_affine(aggregate)
return coreVerify(aggAffine, message, signature, DST)
func fastAggregateVerify*[T: byte|char](
publicKeys: openarray[PublicKey],
message: openarray[T],
signature: Signature
): bool =
## Verify the aggregate of multiple signatures on the same message
## This function is faster than AggregateVerify
##
## The proof-of-possession MUST be verified before calling this function.
## It is recommended to use the overload that accepts a proof-of-possession
## to enforce correct usage.
# 1. aggregate = pubkey_to_point(PK_1)
# 2. for i in 2, ..., n:
# 3. next = pubkey_to_point(PK_i)
# 4. aggregate = aggregate + next
# 5. PK = point_to_pubkey(aggregate)
# 6. return CoreVerify(PK, message, signature)
if publicKeys.len == 0:
return false
var aggregate {.noInit.}: blst_p1
aggregate.blst_p1_from_affine(publicKeys[0].point)
for i in 1 ..< publicKeys.len:
# We assume that the PublicKey is on the curve, in the proper subgroup
aggregate.blst_p1_add_or_double_affine(aggregate, publicKeys[i].point)
var aggAffine{.noInit.}: PublicKey
aggAffine.point.blst_p1_to_affine(aggregate)
return coreVerify(aggAffine, message, signature, DST)
@kwantam Thanks for the clarification! And hello @burdges 👋 I should have given more context on why I was asking this question originally.

IMHO adding KeyValidate inside FastAggregateVerify would be a bigger change for existing implementations than it looks like in the document.

Question: As for the implementation, does it make sense if we provide test vectors with non-KeyValidate'd PKs (e.g., the point-at-infinity PK)?

^^^ The question above is the main thing I'd like to settle. For our Python implementation (py_ecc, an experimental ECC lib), we may add a KeyValidate check to FastAggregateVerify anyway.
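A minimal sketch of what such a wrapper could look like; the G2ProofOfPossession ciphersuite object and its KeyValidate / FastAggregateVerify methods are assumed to match py_ecc's API, so treat the exact names as assumptions:

from typing import Sequence

from py_ecc.bls import G2ProofOfPossession as bls

def fast_aggregate_verify_checked(pks: Sequence[bytes], message: bytes, signature: bytes) -> bool:
    if len(pks) == 0:
        return False
    # Explicitly reject PKs that fail KeyValidate (including the infinity PK),
    # instead of relying only on the PopVerify precondition.
    if not all(bls.KeyValidate(pk) for pk in pks):
        return False
    return bls.FastAggregateVerify(pks, message, signature)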
My opinion is that it does not make sense to test FastAggregateVerify on inputs that violate its preconditions, because by definition any behavior is conforming for such inputs. But of course, since any behavior is conforming, y'all are free to force implementations to use a particular one if you want.
And, to be clear: there will be no PureFastAggregateVerify in the document.

(I'm of course happy to discuss other proposed changes! Not trying to close the door on the conversation. But neither changing the API nor attempting to enforce some kind of typing discipline in pseudocode seems reasonable.)
(Thanks to @mratsim for pointing this out.)
Hi @kwantam and co.

The point-at-infinity public key (i.e., the public key of SK=0) is rejected by KeyValidate. Therefore, it's also disallowed in CoreVerify, CoreAggregateVerify, AggregateVerify, and PopVerify.

FastAggregateVerify, however, does not use KeyValidate to check the PKs before aggregating the PKs, so having a point-at-infinity PK is valid. FastAggregateVerify also doesn't check pubkey_subgroup_check before calling pubkey_to_point; I'm not sure if it's required for the formal spec. (I suppose implementations return False when pubkey_to_point raises exceptions anyway?)

Questions: Is it intended that FastAggregateVerify accepts the point-at-infinity PK, or should KeyValidate be applied to its inputs as well? Note that implementations may provide AggregatePKs, or more generic AggregateG1, APIs to deal with the aggregation inside FastAggregateVerify, so adding KeyValidate to FastAggregateVerify may add more overhead than it looks like in the IETF document. It would be nice if this can be figured out with minimum changes.

Thanks for your time. :)
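To make the AggregatePKs point concrete, here is a minimal Python sketch of aggregation factored into a reusable helper; every helper name below is a hypothetical stand-in, not the draft's or any library's API:

from typing import Sequence

def pubkey_to_point(pk: bytes):              # stand-in: deserialize to a G1 point
    raise NotImplementedError

def point_add(a, b):                         # stand-in: G1 point addition
    raise NotImplementedError

def point_to_pubkey(point) -> bytes:         # stand-in: serialize a G1 point
    raise NotImplementedError

def key_validate(pk: bytes) -> bool:         # stand-in: KeyValidate
    raise NotImplementedError

def core_verify(pk: bytes, message: bytes, signature: bytes) -> bool:
    raise NotImplementedError                # stand-in: CoreVerify

def aggregate_pks(pks: Sequence[bytes]) -> bytes:
    # Generic AggregatePKs-style helper, typically reused outside
    # FastAggregateVerify as well.
    aggregate = pubkey_to_point(pks[0])
    for pk in pks[1:]:
        aggregate = point_add(aggregate, pubkey_to_point(pk))
    return point_to_pubkey(aggregate)

def fast_aggregate_verify(pks: Sequence[bytes], message: bytes, signature: bytes) -> bool:
    if len(pks) < 1:
        return False
    # If KeyValidate were required, it would have to go either here or inside
    # the shared aggregate_pks helper, e.g.:
    #   if not all(key_validate(pk) for pk in pks): return False
    return core_verify(aggregate_pks(pks), message, signature)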