Change capping algorithm to reduce the likelihood of burn share going to the Legacy Burning Man #412
Comments
Great, thanks for the proposal and analysis! Can you think of a case where the iteration would run endlessly? I guess not (it reminded me of the fee estimation problem, but that's different, as the input selection can change with adjusted fees, so in some cases the adjustment iteration would never stop).
@HenrikJannsen There shouldn't be a case where the iteration runs endlessly, since the set of uncapped contributors shrinks by at least one element each round, so there should be no more rounds than active burning men. It should be possible to calculate all the capped burn shares in n log(n) time or better.

I imagine the new capping algorithm could be implemented indirectly by sorting the list of contributors in descending order of the ratio of uncapped burn share to cap size, then removing elements from the list one at a time so long as the adjusted burn share of that contributor is bigger than his cap. For each contributor removed, increase the uniform scale factor (starting from 1) so that the sum of the adjusted burn shares of the remaining contributors in the list remains at 100% (without computing the sum over and over, since one just subtracts each share from the total). Then the contributors remaining in the list (if any) are the ones who are uncapped, with their shares adjusted up uniformly by the final scale factor, and the LBM getting a nonzero share whenever the list is iterated through to the end.

(I think one potential issue is that with lots of capping rounds/iterations, the final scale factor for the uncapped burn shares could wind up enormous in some cases, so that burning a tiny amount like the current dust limit of 5.46 BSQ, say on behalf of an inactive burning man, could give him up to an 11% burn share. But in theory that could still happen with the present algorithm.)

I don't think it will cause issues that DPT outputs have a minimum amount, since the changes probably would not touch
Hi @stejbac, I have marked this as approved. Is this something you can implement?
@pazza83 Yes, sure. I can start work on that shortly. (Sorry, I've not been active over the last few months. I intend to start contributing to Bisq again this month.) |
Add an inner class to 'BurningManServiceTest' to test the cap shares & (capped + uncapped) burn shares allocated to each candidate found upon calling 'BurningManService.getBurningManCandidatesByName'. To this end, set up mocks of 'DaoStateService' and the other injected dependencies of 'BurningManService' to return skeletal proof-of-burn & compensation issuance txs & payloads for the test users supplied for each case.

Ensure the test cases exercise the capping algorithm fairly thoroughly, as it is to be changed in the subsequent commits, per the proposal: bisq-network/proposals#412

In particular, provide test cases where more than two capping rounds would be applied by the new algorithm, as opposed to the current algorithm, which always applies at most two. (This leads to strictly lower shares for the Legacy Burning Man and non-strict increases in the shares of all the other burning men.)
Change the algorithm used to adjust & cap the burn share of each BM candidate to use an unlimited number of 'rounds', as described in: bisq-network/proposals#412

That is, instead of capping the shares once, then distributing the excess to the remaining BM, then capping again and giving any excess to the Legacy Burning Man, we cap-redistribute-cap-redistribute-... an unlimited number of times until no more candidates are capped. This has the effect of reducing the LBM's share and increasing everyone else's, alleviating the security risk of giving too much to the LBM (who is necessarily uncapped).

Instead of implementing the new algorithm directly, we simply enlarge the set of candidates who should be capped to include those who would eventually be capped by the new algorithm, in order to determine how much excess burn share should go to the remaining BM. Then we apply the original method, 'candidate.calculateCappedAndAdjustedShares(..)', to set each share to be equal to its respective cap or uniformly scaled upwards from the starting amount accordingly.

To this end, the static method 'BurningManService.imposeCaps' is added, which determines which candidates will eventually be capped, by sorting them in descending order of burn-share/cap-share ratio, then marking all the candidates in some suitable prefix of the list as capped, iterating through them one-by-one & gradually increasing the virtual capping round (starting at zero) until the end of the prefix is reached. (The method also determines what the uncapped adjusted burn share of each BM should be, but that only affects the BM view & burn targets.) In this way, the new algorithm runs in guaranteed O(n * log n) time.

To prevent failed trades, the new algorithm is set to activate at time 'DelayedPayoutTxReceiverService.PROPOSAL_412_ACTIVATION_DATE', with a placeholder value of 12am, 1st January 2024 (UTC). This simply toggles whether the for-loop in 'imposeCaps' should stop after capping round 0, since doing so will lead to identical behaviour to the original code (even accounting for FP rounding errors).
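Below is a minimal stand-alone sketch of that indirect approach, under the assumption of a simplified candidate record and hypothetical names (Candidate, imposeCapsSketch, etc.); it is not the actual Bisq implementation and omits the capping-round bookkeeping and the activation-date toggle mentioned above. It assumes positive caps and burn shares that sum to 100%.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class CappingSketch {
    // Hypothetical, simplified stand-in for a burning man candidate.
    record Candidate(String name, double burnShare, double capShare) {}

    record Result(String name, double finalShare, boolean capped) {}

    // Determines which candidates would eventually be capped by the proposed algorithm,
    // then sets each final share to its cap or to the uniformly scaled-up burn share.
    // Runs in O(n log n) time, dominated by the initial sort.
    static List<Result> imposeCapsSketch(List<Candidate> candidates) {
        List<Candidate> sorted = new ArrayList<>(candidates);
        // Biggest burn-share/cap-share ratio first, i.e. the candidates capped earliest.
        sorted.sort(Comparator.comparingDouble(
                (Candidate c) -> c.burnShare() / c.capShare()).reversed());

        double remainingShare = 1.0;  // share budget left for the still-uncapped candidates
        double remainingBurn = sorted.stream().mapToDouble(Candidate::burnShare).sum();

        List<Result> results = new ArrayList<>();
        int i = 0;
        for (; i < sorted.size(); i++) {
            Candidate c = sorted.get(i);
            if (remainingBurn <= 0) {
                break;  // only zero-burn candidates left; they stay uncapped at zero share
            }
            double scale = remainingShare / remainingBurn;
            if (c.burnShare() * scale <= c.capShare()) {
                break;  // this candidate (and, by the sort order, every later one) stays uncapped
            }
            results.add(new Result(c.name(), c.capShare(), true));
            remainingShare -= c.capShare();
            remainingBurn -= c.burnShare();
        }
        // The remaining candidates are uncapped and scaled uniformly; any shortfall left
        // when everyone ends up capped implicitly goes to the Legacy Burning Man.
        double finalScale = remainingBurn > 0 ? remainingShare / remainingBurn : 0;
        for (; i < sorted.size(); i++) {
            Candidate c = sorted.get(i);
            results.add(new Result(c.name(), c.burnShare() * finalScale, false));
        }
        return results;
    }
}
```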
Last time I ran with master, I had a failed trade due to this being active already, vs a trading peer that was using the prior algo. Should the activation date for this be pushed forward again, and if so, what should the date be? @alvasw
The present capping algorithm
The current algorithm for determining the percentage of trade fee and DPT revenue going to each Burning Man subjects their (decayed) burn shares to a cap, determined by their respective (decayed) issuance shares. Specifically, this cap is the minimum of 11% and the decayed issuance share times 10 (the boost factor), so a contributor with X% decayed issuance has a capped burn share of no more than 10 * X% if X is less than 1.1 and no more than 11% otherwise. The limit of 11% payout per contributor is intended to prevent him from profiting by carrying out a bunch of trades (say as the buyer) and driving them into arbitration to collect the DPT payout shares, since those will be less than the security deposit he paid, which is at least 15% of the trade amount (>11% of the total escrow amount).
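As a hedged illustration of the cap rule just described (the constant and method names below are illustrative, not the actual Bisq identifiers):

```java
static final double MAX_BURN_SHARE_CAP = 0.11;     // hard ceiling of 11% per contributor
static final double ISSUANCE_BOOST_FACTOR = 10.0;  // decayed issuance share is boosted 10x

// A contributor with a decayed issuance share of X is capped at min(11%, 10 * X).
static double burnShareCap(double decayedIssuanceShare) {
    return Math.min(MAX_BURN_SHARE_CAP, ISSUANCE_BOOST_FACTOR * decayedIssuanceShare);
}
```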
In the event that some of the contributors get their shares capped, the sum of all the capped and uncapped shares will be less than 100%, so in this case the present algorithm scales all the remaining (uncapped) shares up by a uniform factor, so that the total remains at 100% (effectively redistributing the capped contributors' excess burn share to the uncapped contributors, in proportion to their shares). This creates the problem that some of the adjusted (scaled-up) burn shares could now themselves exceed their respective caps.
To solve this problem, the present algorithm applies a second round of capping as needed, making sure that none of the adjusted shares exceed their respective caps. But the sum of all the shares may now again fall short of 100%, so in that (hopefully rare) case, the remainder is set to go to the Legacy Burning Man. Note that there is no further cap in that case, and the LBM could theoretically get any amount of the revenue/payouts up to 100%.
There are two problems with the capping algorithm as it stands:
It is not as mathematically elegant as it could be. In particular, the (multivariate) function from tuples of uncapped burn shares to tuples of capped burn shares is not continuous. That is, a tiny perturbation in the uncapped burn percentages can sometimes lead to a large perturbation in the final capped burn percentages. To see this, suppose that there are two contributors A and B at or close to their respective caps, while the other contributors are far below their respective caps. If we suppose A is well above his cap and B is just slightly above his, then A and B will be capped in the first round and everyone else's shares will be adjusted upwards by the factor (100% - cap_A - cap_B) / (100% - share_A - share_B), where share_A and share_B denote the uncapped burn shares of A and B, and cap_A and cap_B their respective caps.
If, on the other hand, B is slightly below his cap, then A will be capped in the first round, B will be capped in the second round, and everyone else's shares will be adjusted upwards by the smaller factor (100% - cap_A) / (100% - share_A).
Thus there is a big jump downwards in the adjustments of everyone else's shares, with the remainder going to the LBM.
By burning a large amount of BSQ on other contributors' behalf, it is possible for a corrupt LBM to manipulate the burn shares to result in a final DPT payout percentage of far greater than 11% going to him, which would allow him to profitably steal money from the traders by sweeping the offer book and driving the resultant trades into arbitration. This attack is explained below.
A hypothetical attack by the Legacy Burning Man.
Any Bisq user can burn BSQ on behalf of any other contributor to increase their burn share, since the decayed burn amounts are nothing more than ordinary proofs of burn with the hash of the contributor name in the OP_RETURN data field. Under the present capping algorithm, this opens up an attack vector by the LBM to give himself a share greater than 11%. To do this, he can pick a tiny contributor T with a negligible amount of the decayed issuance share, as well as several largish contributors A, B, C,..., each with say at least 1.1% of the decayed issuance share, so that their caps are all large, say around the maximum of 11%. He then burns a large amount on behalf of each of the large contributors, to push up their burn shares to around double their respective caps, say around 22% each. Finally, the LBM burns a very large amount on behalf of the tiny contributor T, taking him well past his cap and more than doubling the total decayed burn amount. This roughly halves everyone else's burn shares and in particular pushes A, B, C,... back down to just under their caps. Now under the present algorithm, T gets capped in the first round and A, B, C,... get adjusted back up to a 22% burn share each. Then in the second round, they are capped at 11% each, with the excess 11% from each going to the LBM. In this way, the LBM could get much more than an 11% total final share, say even 33% or more.

In more concrete terms, suppose there is a total decayed burn amount of 250,000 BSQ -- this is close to the actual decayed burn amount as of the end of March 2023. Then the LBM could pick a tiny contributor T with a negligible issuance share (and cap) and three large contributors A, B and C, with decayed issuance share at least 1.1% each and thus caps of 11% each. He then burns up to 250,000 * (1/0.34 - 1) / 3 ~= 161,765 BSQ on behalf of each of A, B and C (though likely substantially less if they already have nonzero decayed burn amounts), to take each of them up to just under a 22% burn share. This increases the total decayed burn amount to no more than 735,294 BSQ. Then the LBM burns another <735,294 BSQ on behalf of T to double the total and push the uncapped amounts of A, B and C back down to just under 11% each. After the second capping round, this leaves the LBM with at least 33% of the final DPT payout share, to complete his attack by sweeping the offer book and driving as many trades as possible into arbitration. So the LBM needs up to 250,000 * (2/0.34 - 1) ~= 1,220,588 BSQ (plus BTC for security deposits) to carry out his attack, though likely substantially less.
(There are probably variants to the above example that are more efficient and don't need anywhere near 1.22 million BSQ.)
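To sanity-check the arithmetic above, here is a small stand-alone simulation of the present two-round algorithm on a scenario like the one described. The names are hypothetical and the amounts slightly rounded (so that A, B and C land just under their caps in the first round); it is a simplified model rather than Bisq code, and it prints a Legacy Burning Man share of just under 33%.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LbmAttackSimulation {
    public static void main(String[] args) {
        // Decayed burn amounts (BSQ) after the attacker's proofs of burn.
        Map<String, Double> burn = new LinkedHashMap<>();
        burn.put("others", 250_000.0);  // pre-existing decayed burn of everyone else
        burn.put("A", 160_000.0);
        burn.put("B", 160_000.0);
        burn.put("C", 160_000.0);
        burn.put("T", 750_000.0);       // burned on behalf of the tiny contributor T

        // Caps from decayed issuance shares: A, B, C at the 11% maximum, T negligible,
        // "others" comfortably above their combined burn share.
        Map<String, Double> cap = Map.of(
                "others", 0.50, "A", 0.11, "B", 0.11, "C", 0.11, "T", 0.0001);

        double total = burn.values().stream().mapToDouble(Double::doubleValue).sum();
        Map<String, Double> share = new LinkedHashMap<>();
        burn.forEach((name, b) -> share.put(name, b / total));

        // Round 1: cap the offenders, then scale the uncapped shares up to a 100% total.
        Map<String, Boolean> cappedInRound1 = new LinkedHashMap<>();
        double cappedSum = 0, uncappedSum = 0;
        for (var e : share.entrySet()) {
            boolean over = e.getValue() > cap.get(e.getKey());
            cappedInRound1.put(e.getKey(), over);
            if (over) {
                e.setValue(cap.get(e.getKey()));
                cappedSum += e.getValue();
            } else {
                uncappedSum += e.getValue();
            }
        }
        double scale = (1 - cappedSum) / uncappedSum;

        // Round 2: cap the scaled shares; the shortfall from 100% goes to the LBM.
        double finalTotal = 0;
        for (var e : share.entrySet()) {
            double adjusted = cappedInRound1.get(e.getKey()) ? e.getValue() : e.getValue() * scale;
            double finalShare = Math.min(adjusted, cap.get(e.getKey()));
            finalTotal += finalShare;
            System.out.printf("%-7s %.4f%n", e.getKey(), finalShare);
        }
        System.out.printf("%-7s %.4f%n", "LBM", 1 - finalTotal);  // ~0.327, just under 33%
    }
}
```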
Proposed capping algorithm modification
To mitigate the two problems above, I propose applying an unlimited number of capping rounds, instead of just two. In each round, any (possibly already adjusted) shares that exceed their respective caps are capped and the subtracted share is distributed to the remaining uncapped contributors who are active burning men, if any, by further adjusting their shares upwards by a uniform scaling factor to reach a total of 100%. In the next round, some of those contributors may end up having exceeded their cap, so the process repeats, with the set of uncapped contributors shrinking each round, until it terminates. This will either be because every active burning man has been capped, in which case any remaining burn share goes to the LBM, or because no further capping is necessary. In the latter case, some of the contributors will be capped and the remaining contributors (at least those with nonzero decayed burn amounts) will have had their shares scaled upwards uniformly to reach a total (capped + uncapped) of 100%, with the LBM getting nothing.
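A minimal sketch of this cap-redistribute loop, written in the direct iterative form described above (hypothetical names, plain fractions rather than Bisq's data structures, and glossing over details such as restricting redistribution to active burning men):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class IterativeCappingSketch {
    // Repeatedly caps any share exceeding its cap and redistributes the excess to the
    // remaining uncapped candidates, until no share exceeds its cap or everyone is capped.
    // Whatever share mass is lost along the way (if any) is the LBM's share.
    static Map<String, Double> capAndRedistribute(Map<String, Double> burnShares,
                                                  Map<String, Double> caps) {
        Map<String, Double> shares = new LinkedHashMap<>(burnShares);
        Map<String, Boolean> capped = new LinkedHashMap<>();
        burnShares.keySet().forEach(name -> capped.put(name, false));

        boolean changed = true;
        while (changed) {
            changed = false;
            double excess = 0, uncappedSum = 0;
            // Cap every share currently above its cap.
            for (var e : shares.entrySet()) {
                if (!capped.get(e.getKey()) && e.getValue() > caps.get(e.getKey())) {
                    excess += e.getValue() - caps.get(e.getKey());
                    e.setValue(caps.get(e.getKey()));
                    capped.put(e.getKey(), true);
                    changed = true;
                } else if (!capped.get(e.getKey())) {
                    uncappedSum += e.getValue();
                }
            }
            // Redistribute the excess multiplicatively over the uncapped candidates;
            // if none remain, the excess is simply left over and goes to the LBM.
            if (changed && uncappedSum > 0) {
                double factor = (uncappedSum + excess) / uncappedSum;
                for (var e : shares.entrySet()) {
                    if (!capped.get(e.getKey())) {
                        e.setValue(e.getValue() * factor);
                    }
                }
            }
        }
        return shares;  // the LBM's share is 100% minus the sum of these
    }
}
```

Since every iteration that changes anything caps at least one candidate, the loop runs at most once per candidate, matching the observation in the comments above that there can be no more rounds than active burning men.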
Thus with the proposed algorithm, the LBM gets a nonzero share (which could exceed 11%) precisely when the sum of the caps of all the active burning men is less than 100%. This will hopefully be rare, as preventing it only requires 10 active BM with a decayed issuance share greater than 1.1% each, so that their 11% caps sum to at least 100% (or a larger number of active BM with somewhat smaller issuance shares).
Note that the percentage going to the LBM (if any) is independent of the decayed burn amounts, under the modified algorithm, and only depends on the decayed issuance shares of the active burning men. Thus it is impossible for the LBM to increase their percentage by burning on behalf of other contributors -- they can only manipulate it downwards by burning on behalf of not yet active burning men, thus increasing the sum of the caps of the active burning men.
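Concretely, under the proposed algorithm the LBM's share reduces to the shortfall (if any) of the active burning men's combined caps below 100%; a hedged one-liner, with names of my own choosing:

```java
// Shortfall (if any) of the active burning men's combined caps below 100%;
// zero whenever those caps sum to at least 100%.
static double legacyBurningManShare(java.util.Collection<Double> capsOfActiveBurningMen) {
    double capSum = capsOfActiveBurningMen.stream().mapToDouble(Double::doubleValue).sum();
    return Math.max(0.0, 1.0 - capSum);
}
```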
As of the end of March 2023, no one is being capped and certainly no contributor is being capped in the second round, so the algorithm change would hopefully not cause too much disruption. Perhaps the number of capping rounds could be increased from two to unlimited after a certain block height is reached.