
[SEC] Observations on Point Validation #1715

Closed
eschorn1 opened this issue Sep 21, 2020 · 1 comment · Fixed by #3489
Labels
difficulty:high Difficulty of exploitation enhancement New feature or request nbc-audit-2020-2 🛂 severity:low Overall issue severity status:reported

Comments

@eschorn1

Description

This is a strictly informational observation intended to generate thought on the overall point validation strategy.

The BLS Signature specification places a high importance on point validation as evidenced by the following excerpt:

Section 5.1. Validating public keys: All algorithms in Section 2 and Section 3 that operate on public keys require first validating those keys. ...

Section 5.2. Skipping membership check: Some existing implementations skip the signature_subgroup_check invocation in CoreVerify (Section 2.7), whose purpose is ensuring that the signature is an element of a prime-order subgroup. This check is REQUIRED of conforming implementations, for two reasons.

  1. For most pairing-friendly elliptic curves used in practice, the pairing operation e (Section 1.3) is undefined when its input points are not in the prime-order subgroups of E1 and E2. The resulting behavior is unpredictable, and may enable forgeries.
  2. Even if the pairing operation behaves properly on inputs that are outside the correct subgroups, skipping the subgroup check breaks the strong unforgeability property.

The BLS Signature functionality implemented in nim-blscurve does in fact perform the validations described above. However, the parsePubkey() function in beacon_chain/validator_api.nim shown below simply deserializes a hex string and passes it along to downstream logic without further (subgroup) validation.

https://github.com/status-im/nim-beacon-chain/blob/4b38619c4f51847fc591e85327ed226ad46d24ac/beacon_chain/validator_api.nim#L36-L40

The above case is not itself a significant security issue, as the result is returned in responses for the get_v1_beacon_states_stateId_validators and get_v1_beacon_states_stateId_validators_validatorId RPC endpoints, which should be validated by the request originator prior to use. Arguably, point validation at usage and ingestion is far more important than on an output path.

However, it is noted that the BLS signature specification assumes data originates and terminates as byte strings (e.g., the signature_to_point() function is step 1 of the CoreVerify, Aggregate, and CoreAggregateVerify algorithms, while point_to_signature() is essentially the last step of the CoreSign, Aggregate, and PopProve algorithms). In contrast, the nim-blscurve and nim-beacon-chain implementations tend to deserialize points once, store them internally in point form, and pass these points to the individual algorithms.

While there are a variety of places and paths in beacon chain that deserialize points, they all go through the lowest-level byte-oriented function (i.e., the fromBytes() function shown below). This function serves as a 'choke point' for G2 (E2), and a nearby sibling does the same for G1 (E1). Neither currently performs subgroup validation.

https://github.com/status-im/nim-blscurve/blob/da9ae49dab4cbb95b2cdaa68ecc221ee15c67a9a/blscurve/miracl/common.nim#L649-L674

func fromBytes*(res: var ECP2_BLS12381, data: openarray[byte]): bool =
  ## Unserialize ECP2(G2) point from array of bytes ``data``.
  ##
  ## This procedure supports only compressed form of serialization.
  ##
  ## Returns ``true`` on success, ``false`` otherwise.
  if len(data) >= MODBYTES_384 * 2:
    if (data[0] and (1'u8 shl 7)) != 0:
      if (data[0] and (1'u8 shl 6)) != 0:
        # Infinity point
        res.inf()
        result = true
      else:
        var buffer: array[MODBYTES_384, byte]
        var x1, x0: BIG384
        let greatest = (data[0] and (1'u8 shl 5)) != 0'u8
        copyMem(addr buffer[0], unsafeAddr data[0], MODBYTES_384)
        buffer[0] = buffer[0] and 0x1F'u8
        if x1.fromBytes(buffer):
          copyMem(addr buffer[0], unsafeAddr data[MODBYTES_384], MODBYTES_384)
          if x0.fromBytes(buffer):
            var x: FP2_BLS12381
            x.fromBigs(x0, x1)
            if res.setx(x, greatest) == 1:
              result = true
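
For illustration, the flag handling above (bit 7: compressed form, bit 6: point at infinity, bit 5: "greatest" square root of y) can be sketched in Python. The function name parse_g2_flags and its return shape are hypothetical, not part of the nim-blscurve API:

```python
# Sketch of the header-flag logic in fromBytes above; names are hypothetical.
MODBYTES_384 = 48  # byte length of one Fp element for BLS12-381

def parse_g2_flags(data: bytes):
    """Return ('infinity',), ('point', greatest, x1_bytes, x0_bytes),
    or None if the input is malformed."""
    if len(data) < MODBYTES_384 * 2:
        return None
    if (data[0] & (1 << 7)) == 0:          # only compressed form is supported
        return None
    if (data[0] & (1 << 6)) != 0:          # infinity flag set
        return ("infinity",)
    greatest = (data[0] & (1 << 5)) != 0   # which square root of y to take
    x1 = bytes([data[0] & 0x1F]) + data[1:MODBYTES_384]   # mask off flag bits
    x0 = data[MODBYTES_384:2 * MODBYTES_384]
    return ("point", greatest, x1, x0)
```

As in the Nim original, nothing here checks that the decoded point lies in the prime-order subgroup.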

In any event, ensuring all points are validated correctly prior to processing remains a constant challenge as the code evolves. Further, note that this validation is expensive: point multiplication by the cofactor, which is a 125-bit integer.

Exploit Scenario

The original parsePubkey() function cited above demonstrates the potential for missing validation checks. As stated in the referenced standard, the pairing operation is undefined when its input points are not in the prime-order subgroups of E1 and E2. The resulting behavior is unpredictable and may enable forgeries.

Mitigation Recommendation

The recommendation is to consider placing the point (subgroup) validation check at the point of deserialization, immediately after the code shown above. The two routines could perform point validation by default and skip it only when given an optional flag. The point structure could include a flag indicating the check has been performed, so that algorithms can confirm the check (rather than repeat it) or execute it lazily (though this risks mutability issues). Alternatively, a validated point could be given a different type to cleanly separate and indicate its status, e.g. unverifiedPubKey and verifiedPubKey. These approaches would allow the removal of a variety of potentially redundant and expensive checks in the BLS signature functionality, improving performance while robustly ensuring all points are validated prior to processing. Finally, detecting errors early at ingestion improves debugging and attribution.
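
As a sketch of the "different type per validation status" option, in Python (UnverifiedPubKey, VerifiedPubKey, validate, and core_verify are illustrative names, not from the codebase):

```python
# Hypothetical sketch: distinct types so the type system tracks whether
# the expensive subgroup check has been performed.
from dataclasses import dataclass

@dataclass(frozen=True)
class UnverifiedPubKey:
    raw: bytes          # deserialized but not yet subgroup-checked

@dataclass(frozen=True)
class VerifiedPubKey:
    raw: bytes          # invariant: subgroup check passed at construction

def validate(key: UnverifiedPubKey, subgroup_check) -> VerifiedPubKey:
    # subgroup_check stands in for the cofactor/endomorphism membership test
    if not subgroup_check(key.raw):
        raise ValueError("public key not in the prime-order subgroup")
    return VerifiedPubKey(key.raw)

def core_verify(pk: VerifiedPubKey, message: bytes, signature: bytes) -> bool:
    # The signature type guarantees pk was validated exactly once upstream,
    # so a per-call subgroup check can be dropped here.
    ...
```

Downstream algorithms that only accept VerifiedPubKey cannot be handed an unchecked point by mistake, which is the robustness argument above.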

@eschorn1 eschorn1 added enhancement New feature or request difficulty:high Difficulty of exploitation severity:low Overall issue severity status:reported nbc-audit-2020-2 🛂 labels Sep 21, 2020
@mratsim
Contributor

mratsim commented Sep 22, 2020

Further, note that this validation is expensive: point multiplication by the cofactor, which is a 125-bit integer.

On G2, the cofactor is over 500 bits, so checking signatures is even more expensive:

# Parameters
x = -(2^63 + 2^62 + 2^60 + 2^57 + 2^48 + 2^16)
p = (x - 1)^2 * (x^4 - x^2 + 1)//3 + x
r = x^4 - x^2 + 1
t = x + 1
print('p  : ' + p.hex())
print('r  : ' + r.hex())
print('t  : ' + t.hex())
print('p (mod r) == t-1 (mod r) == 0x' + (p % r).hex())

# Finite fields
Fp       = GF(p)
K2.<u>  = PolynomialRing(Fp)
Fp2.<beta>  = Fp.extension(u^2+1)

# Curves
b = 4
SNR = Fp2([1, 1])
G1 = EllipticCurve(Fp, [0, b])
G2 = EllipticCurve(Fp2, [0, b*SNR])

# https://crypto.stackexchange.com/questions/64064/order-of-twisted-curve-in-pairings
# https://math.stackexchange.com/questions/144194/how-to-find-the-order-of-elliptic-curve-over-finite-field-extension
cofactorG1 = G1.order() // r
cofactorG2 = G2.order() // r

print('')
print('cofactor G1: ' + cofactorG1.hex())
print('cofactor G2: ' + cofactorG2.hex())
print('cofactor G1 bitlength: ' + str(int(cofactorG1).bit_length()))
print('cofactor G2 bitlength: ' + str(int(cofactorG2).bit_length()))
print('')
p  : 1a0111ea397fe69a4b1ba7b6434bacd764774b84f38512bf6730d2a0f6b0f6241eabfffeb153ffffb9feffffffffaaab
r  : 73eda753299d7d483339d80809a1d80553bda402fffe5bfeffffffff00000001
t  : -d20100000000ffff
p (mod r) == t-1 (mod r) == 0x73eda753299d7d483339d80809a1d80553bda402fffe5bfe2dfefffeffff0001

cofactor G1: 396c8c005555e1568c00aaab0000aaab
cofactor G2: 5d543a95414e7f1091d50792876a202cd91de4547085abaa68a205b2e5a7ddfa628f1cb4d9e82ef21537e293a6691ae1616ec6e786f0c70cf1c38e31c7238e5
cofactor G1 bitlength: 126
cofactor G2 bitlength: 507
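
The printed parameters can be cross-checked with plain Python integers (no Sage needed); the G1 cofactor also has the closed form (z - 1)^2 / 3, since #E1 = p + 1 - t = (z - 1)^2 * r / 3:

```python
# Recompute the BLS12-381 parameters printed above with plain integers.
z = -(2**63 + 2**62 + 2**60 + 2**57 + 2**48 + 2**16)   # = -0xd201000000010000
r = z**4 - z**2 + 1                                     # subgroup order
p = (z - 1)**2 * r // 3 + z                             # field modulus (division is exact)
t = z + 1                                               # trace of Frobenius

assert p == 0x1a0111ea397fe69a4b1ba7b6434bacd764774b84f38512bf6730d2a0f6b0f6241eabfffeb153ffffb9feffffffffaaab
assert r == 0x73eda753299d7d483339d80809a1d80553bda402fffe5bfeffffffff00000001
assert t == -0xd20100000000ffff
assert p % r == (t - 1) % r

# G1 cofactor: #E1 = (z - 1)^2 * r / 3, so h1 = (z - 1)^2 / 3
h1 = (z - 1)**2 // 3
assert h1 == 0x396c8c005555e1568c00aaab0000aaab
print(h1.bit_length())  # 126, matching the Sage output
```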

Implementation

With the BLST backend, subgroup checks use the Bowe19 endomorphisms: https://eprint.iacr.org/2019/814

With MIRACL we use naive multiplication by the cofactor, but this backend is only a fallback, i.e. for our main targets (x86-64 and ARM64) subgroup checks are cheap enough.

Bowe19 subgroup checks

In spec form they are pairingwg/bls_standard#21

Sean Bowe shows how to check subgroup membership more quickly than exponentiation by the group order.

This post quickly summarizes the results as pseudocode.

TODO: should subgroup testing be a required part of deserialization? Sean points out (in personal communication) that G2 subgroup checks are necessary, because the pairing operation is undefined otherwise. So probably it makes sense to just require subgroup checks for both G1 and G2.

For G1, define the endomorphism sigma as

sigma(x, y)

Input: a point P = (x, y)
Output: a point Q = (x', y')

Constants:
1. beta = 0x1a0111ea397fe699ec02408663d4de85aa0d857d89759ad4897d29650fb85f9b409427eb4f49fffd8bfd00000000aaac

Steps:
1. x' = x * beta
2. y' = y
3. return (x', y')

Then, to check subgroup membership, test the following:

subgroup_test_g1(x, y)

Input: a point P
Output: True if P is in the order-q subgroup of E1, else False

Constants:
- c1 = (z^2 - 1) / 3 (in the integers), where z = -0xd201000000010000
- point_at_infinity_E1 is the point at infinity on E1

Steps:
1. sP = sigma(P)
2. Q = 2 * sP
3. sP = sigma(sP)
4. Q = Q - P
5. Q = Q - sP
6. Q = c1 * Q
7. Q = Q - sP
8. return Q == point_at_infinity_E1
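
The G1 pseudocode above can be sketched in pure Python with naive affine arithmetic on E1: y^2 = x^3 + 4 over Fp. The field modulus, beta, z, and the generator coordinates below are the standard BLS12-381 parameters; this is a variable-time illustration, not a production implementation:

```python
# Naive affine implementation of the G1 subgroup check above (illustrative only).
p = 0x1a0111ea397fe69a4b1ba7b6434bacd764774b84f38512bf6730d2a0f6b0f6241eabfffeb153ffffb9feffffffffaaab
z = -0xd201000000010000
beta = 0x1a0111ea397fe699ec02408663d4de85aa0d857d89759ad4897d29650fb85f9b409427eb4f49fffd8bfd00000000aaac
c1 = (z * z - 1) // 3            # the constant c1 = (z^2 - 1) / 3 from the pseudocode

def add(P, Q):
    """Affine point addition on E1; None is the point at infinity."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                   # P + (-P) = infinity
    if P == Q:
        m = 3 * x1 * x1 * pow(2 * y1, p - 2, p) % p   # tangent slope
    else:
        m = (y2 - y1) * pow(x2 - x1, p - 2, p) % p    # chord slope
    x3 = (m * m - x1 - x2) % p
    return (x3, (m * (x1 - x3) - y1) % p)

def neg(P):
    return None if P is None else (P[0], (-P[1]) % p)

def mul(k, P):
    """Double-and-add scalar multiplication, k >= 0."""
    R = None
    while k:
        if k & 1:
            R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

def sigma(P):
    """The endomorphism (x, y) -> (beta * x, y) from the pseudocode."""
    return None if P is None else (P[0] * beta % p, P[1])

def subgroup_test_g1(P):
    sP = sigma(P)            # 1. sP = sigma(P)
    Q = add(sP, sP)          # 2. Q = 2 * sP
    sP = sigma(sP)           # 3. sP = sigma(sP)
    Q = add(Q, neg(P))       # 4. Q = Q - P
    Q = add(Q, neg(sP))      # 5. Q = Q - sP
    Q = mul(c1, Q)           # 6. Q = c1 * Q
    Q = add(Q, neg(sP))      # 7. Q = Q - sP
    return Q is None         # 8. Q == point_at_infinity_E1
```

On the standard G1 generator this returns True, while (0, 2), a point that lies on E1 but outside the prime-order subgroup (it has order 3), is rejected.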

For G2, let psi(P) be the "untwist-Frobenius-twist" endomorphism given by Galbraith and Scott in Section 5 of GS08. Then to test subgroup membership, check the following:

subgroup_test_g2(P)

Input: a point P
Output: True if P is in the order-q subgroup of E2, else False

Constants:
- z = -0xd201000000010000
- point_at_infinity_E2 is the point at infinity on E2

Steps:
1. pP = psi(P)
2. pP = psi(pP)
3. Q = P - pP
4. pP = psi(pP)
5. pP = z * pP
6. Q = Q + pP
7. return Q == point_at_infinity_E2

Analysis

With endomorphism acceleration, these subgroup checks become dominated by the scalar multiplications c1 * Q on G1 and z * pP on G2. These scalars are far shorter than the 255-bit group order (z is 64 bits with a Hamming weight of 6), which accelerates the checks by almost 2x on G1 and almost 7.5x on G2.
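
The claim about the scalar z can be checked directly:

```python
# Verify that |z| = 2^63 + 2^62 + 2^60 + 2^57 + 2^48 + 2^16 is a 64-bit
# integer with Hamming weight 6, as used in the speedup estimate above.
z_abs = 0xd201000000010000
assert z_abs == 2**63 + 2**62 + 2**60 + 2**57 + 2**48 + 2**16
print(z_abs.bit_length(), bin(z_abs).count("1"))  # 64 6
```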
