We are seeing some interesting results with these corrected benchmarks in #409: the `verify` function's running time is not strictly increasing as you increase the number of variables of the ML poly! Interestingly, we're only seeing this for BN-254 and not for BLS12-381.
Here are the results on a machine with 16 cores/32 threads:
verify_kzg_range_BN_254/12
time: [2.8977 ms 2.9282 ms 2.9595 ms]
verify_kzg_range_BN_254/14
time: [3.5833 ms 3.6588 ms 3.7354 ms]
verify_kzg_range_BN_254/16
time: [3.8076 ms 3.9259 ms 4.0450 ms]
verify_kzg_range_BN_254/18
time: [3.7091 ms 3.8270 ms 3.9492 ms]
verify_kzg_range_BN_254/20
time: [3.6729 ms 3.7214 ms 3.7745 ms]
verify_kzg_range_BN_254/22
time: [3.8981 ms 3.9363 ms 3.9794 ms]
Note: on a MacBook Air with 8 cores, I am seeing a strictly increasing relationship for both curves.
I also benchmarked the function I was expecting to be the most expensive, `multi_pairing`, and that one does increase roughly linearly with `num_vars`. I suppose that what's causing the difference above is preparing the points for `multi_pairing`, then.
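For reference, here is a minimal sketch of how `multi_pairing` can be timed in isolation from point preparation. It is not the repo's actual benchmark; it assumes the arkworks 0.4 `Pairing` API (`ark-bn254`, `ark-ec`, `ark-std`) and `criterion`, and uses one pairing per variable purely as a stand-in for the real workload size:

```rust
use ark_bn254::{Bn254, G1Projective, G2Projective};
use ark_ec::{pairing::Pairing, CurveGroup};
use ark_std::UniformRand;
use criterion::{criterion_group, criterion_main, Criterion};

fn bench_multi_pairing(c: &mut Criterion) {
    let mut rng = ark_std::test_rng();
    for num_vars in [12usize, 14, 16, 18, 20, 22] {
        // Stand-in workload: one pairing per variable (hypothetical sizing).
        let g1: Vec<_> = (0..num_vars).map(|_| G1Projective::rand(&mut rng)).collect();
        let g2: Vec<_> = (0..num_vars).map(|_| G2Projective::rand(&mut rng)).collect();

        // Point preparation (affine conversion + G2 line precomputation) is done
        // outside the timed closure, so only the multi-pairing itself is measured.
        let g1_prepared: Vec<<Bn254 as Pairing>::G1Prepared> = g1
            .iter()
            .map(|p| <Bn254 as Pairing>::G1Prepared::from(p.into_affine()))
            .collect();
        let g2_prepared: Vec<<Bn254 as Pairing>::G2Prepared> = g2
            .iter()
            .map(|p| <Bn254 as Pairing>::G2Prepared::from(p.into_affine()))
            .collect();

        c.bench_function(&format!("multi_pairing_BN_254/{num_vars}"), |b| {
            // Cloning the prepared vectors is a cheap memcpy relative to the pairing.
            b.iter(|| Bn254::multi_pairing(g1_prepared.clone(), g2_prepared.clone()))
        });
    }
}

criterion_group!(benches, bench_multi_pairing);
criterion_main!(benches);
```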
Do you have any intuition for why this could be happening, namely why point preparation seems to DECREASE for larger values of `n`, and only in highly parallel environments? Maybe this can offer some insights for potential optimization too :)
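One way to probe this (a rough sketch, assuming the preparation step is parallelized with `rayon`, which I haven't verified in this codebase, and the same arkworks crates as above) would be to time just the G2 preparation across different thread-pool sizes and input sizes:

```rust
use ark_bn254::{Bn254, G2Projective};
use ark_ec::{pairing::Pairing, CurveGroup};
use ark_std::UniformRand;
use rayon::prelude::*;
use std::time::Instant;

fn main() {
    let mut rng = ark_std::test_rng();
    for threads in [1usize, 8, 32] {
        let pool = rayon::ThreadPoolBuilder::new()
            .num_threads(threads)
            .build()
            .unwrap();
        for num_vars in [12usize, 16, 20, 22] {
            let points: Vec<_> = (0..num_vars)
                .map(|_| G2Projective::rand(&mut rng).into_affine())
                .collect();
            let start = Instant::now();
            // G2 preparation (Miller-loop line precomputation) is the costly part;
            // time it alone, under an explicit thread pool.
            let _prepared: Vec<<Bn254 as Pairing>::G2Prepared> = pool.install(|| {
                points
                    .par_iter()
                    .map(|p| <Bn254 as Pairing>::G2Prepared::from(*p))
                    .collect()
            });
            println!(
                "threads={threads} num_vars={num_vars} prepare_g2: {:?}",
                start.elapsed()
            );
        }
    }
}
```

If the per-size timings behave differently at 32 threads than at 1 or 8, that would support the idea that thread scheduling or pool overhead in the preparation step, rather than the pairing itself, explains the non-monotonic verify times.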