Aggregating t-out-of-n threshold signatures typically involves (1) computing Lagrange coefficients and (2) doing a multi-exponentiation with them over the signature shares.
(1) is always done *naively*, in O(t^2) time
(2) takes (more or less) O(t) time
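To make the two steps concrete, here's a toy Python sketch over a small prime-order subgroup of Z_p^* (real BLS works over a pairing-friendly elliptic curve, but the algebra is the same; all parameter values below are made-up toy numbers, not from the paper):

```python
# Toy illustration of steps (1) + (2).
p, q, g = 23, 11, 2             # g = 2 generates the order-11 subgroup mod 23
s, a = 7, 3                     # secret key s; Shamir polynomial f(x) = s + a*x mod q
P = [1, 2]                      # the t = 2 signers we aggregate from
shares = {i: (s + a * i) % q for i in P}                # secret-key shares f(i)
H = 9                           # stand-in for the hashed message (a subgroup element)
sigs = {i: pow(H, sk, p) for i, sk in shares.items()}   # signature shares H^f(i)

# Step (1): Lagrange coefficients c_i = L_i^P(0) mod q -- O(t^2) when done naively.
coeffs = {}
for i in P:
    num = den = 1
    for j in P:
        if j != i:
            num = num * (-j) % q
            den = den * (i - j) % q
    coeffs[i] = num * pow(den, -1, q) % q   # modular inverse of the denominator

# Step (2): multi-exponentiation over the signature shares -- O(t) exponentiations.
sigma = 1
for i in P:
    sigma = sigma * pow(sigs[i], coeffs[i], p) % p

assert sigma == pow(H, s, p)    # equals the signature under the full secret key s
```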
This does not scale well 😢
Why the inefficient O(t^2) time? Because of the cost of computing the denominators of the Lagrange coefficients L_i^P(0): there are t products of the form \prod_{j\in P, j\ne i} (i-j), one per signer i and each of size t-1, where P is the set of signers that we aggregate from.
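The naive computation looks like this (a Python sketch over the rationals for readability; in a real scheme everything is reduced mod the group order):

```python
from fractions import Fraction

def lagrange_coeffs_at_zero(P):
    """Naive O(t^2): for each signer i, a Theta(t)-sized denominator product."""
    coeffs = {}
    for i in P:
        num = den = 1
        for j in P:
            if j != i:
                num *= -j       # numerator factors (0 - j)
                den *= i - j    # the expensive denominator product
        coeffs[i] = Fraction(num, den)
    return coeffs
```

Sanity check: since the coefficients interpolate the constant-1 polynomial at 0, they always sum to 1.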
Our work (1) shows how to compute all Lagrange denominators in O(t log^2 t) time via a polynomial multipoint evaluation, (2) implements this efficiently, and (3) applies it to BLS threshold signatures.
Generally, it applies to any Lagrange-based threshold cryptosystem.
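The key identity is that the i-th denominator \prod_{j\ne i}(i-j) equals Z'(i), where Z(x) = \prod_{j\in P}(x-j). So: build Z, differentiate it, and evaluate Z' at all points of P with divide-and-conquer multipoint evaluation. Here's a sketch in Python (my own naming, not the paper's): it uses schoolbook polynomial arithmetic to stay short, but with FFT-based multiplication and a precomputed subproduct tree the same structure runs in O(t log^2 t):

```python
def poly_mul(a, b):                 # coefficient lists, lowest degree first
    r = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            r[i + j] += x * y
    return r

def poly_rem(a, b):                 # remainder of a mod b, for monic b
    a = a[:]
    db = len(b) - 1
    for i in range(len(a) - 1 - db, -1, -1):
        c = a[i + db]
        if c:
            for j in range(db + 1):
                a[i + j] -= c * b[j]
    return a[:db]

def vanishing(points):              # Z(x) = prod (x - p)
    z = [1]
    for pt in points:
        z = poly_mul(z, [-pt, 1])
    return z

def multipoint_eval(f, points):     # divide and conquer down a subproduct tree
    if len(points) == 1:
        acc = 0                     # Horner evaluation at the single point
        for c in reversed(f):
            acc = acc * points[0] + c
        return [acc]
    mid = len(points) // 2
    lo, hi = points[:mid], points[mid:]
    return (multipoint_eval(poly_rem(f, vanishing(lo)), lo) +
            multipoint_eval(poly_rem(f, vanishing(hi)), hi))

def all_denominators(P):
    Z = vanishing(P)
    dZ = [i * c for i, c in enumerate(Z)][1:]   # Z'(x)
    return multipoint_eval(dZ, P)               # [Z'(i) for i in P]
```

For any signer set, `all_denominators(P)` matches the t separate naive products, but shares the work across all of them.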
See ieeexplore.ieee.org/document/9152696
Why do we care? For @Aptos_Network randomness 🎲 , we must do a threshold signature in the PoS setting! This involves (1) rounding validator stakes down to small weights, (2) assigning each validator as many shares as its weight, and (3) doing a *weighted* threshold signature!
Even with careful rounding of stakes, the total weight (i.e., the # of signers in the threshold scheme) can be anywhere from 1,000 to 10,000 (cc @chelseakomlo), which makes naive aggregation rather slow.
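A minimal sketch of steps (1) and (2) of the weighted setup, assuming a simple proportional rounding rule (the helper names and the rule itself are mine; the actual rounding used in production is chosen much more carefully):

```python
def stakes_to_weights(stakes, target_total_weight):
    # Hypothetical rounding: scale stakes so they sum to roughly
    # target_total_weight, round down, and give every validator
    # at least one share.
    total = sum(stakes)
    return [max(1, s * target_total_weight // total) for s in stakes]

def assign_share_ids(weights):
    # Validator k signs with every share ID in ids[k]; a *weighted*
    # threshold signature is then just an ordinary threshold
    # signature over these share IDs.
    ids, nxt = [], 1
    for w in weights:
        ids.append(list(range(nxt, nxt + w)))
        nxt += w
    return ids
```

With thousands of total shares in play, the set P of signing share IDs is huge, which is exactly where the O(t^2) Lagrange denominators hurt.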
So, our 2020 work allows us to get *much* better performance 🚀
Improvement ranges anywhere from 2x to 10x at our scales.
See some concrete numbers below 👇