Investigate possibility of reducing 10-blocks lock #102
I've just checked my node's logs as far back as 2021-08-31 (I don't have older logs), and the largest reorg was 3 blocks deep:
Thanks @SChernykh, it would be very useful to know whether that reflects the situation for the entire history of Monero, or at least the last few years. It's already very good that the largest reorg in the last year was only 3 blocks deep.
I don't have logs anymore, but I vaguely remember seeing some bigger reorgs before; they never reached 10 blocks, though unfortunately I don't remember whether they reached 8.
I'm okay with >0 reorgs that are 10+ blocks long. We can't avoid this issue without an infinite funds lock time, which is impractical; thus, we're always picking a reasonable value. Lowering the lock time could have the following downsides:
Speaking completely without any numbers (:p), I would like to set a goal of cutting the lock time in half, to 5.
This issue was discussed at the MRL meeting:
This is very encouraging, as we didn't get even close to half the current 10-block limit in 5 years. I would say the first 3 years carry relatively less weight, since the network was much smaller and not yet robust. Are there attack vectors that could end up creating reorgs (besides a 50%+1 attack, which is out of scope)? That seems to be the main issue, given that during normal operation of the network reorgs were never more than 3 blocks deep.
@moneromooo-monero mentions smooth was strongly against lowering the lock to <10 blocks some time ago. Do we know the reasoning?
@hyc could you share an extract of your Linode logs that includes reorg events, their times, and their depths? Seeing how the size and frequency of reorgs changed over time, and whether there is a general trend or a correlation with hash rate/upgrades/etc., could help in making an informed decision about what level of reorg probability we are comfortable with.
http://highlandsun.com/hyc/reorgs.txt currently contains 1219 lines (160 KB). All but four reorgs are of size 2; the remaining four are of size 3. But I'd be dubious about using this as a metric in deciding how to change the lock: if we reduce it significantly below 10, that will incentivize attackers to try to double-spend.
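If anyone wants to tabulate the file themselves, here is a rough R sketch; the regular expression is only a guess at the line format (it assumes each reorg entry contains a token like "size 2"), not a confirmed parser for reorgs.txt:

```r
# Hypothetical parser for reorgs.txt; assumes reorg lines contain "size N".
# Adjust the regex to the actual contents of the file.
lines <- readLines("reorgs.txt")
reorg.lines <- grep("size", lines, value = TRUE)
sizes <- as.integer(sub(".*size[: ]*([0-9]+).*", "\\1", reorg.lines))
table(sizes) # Count of reorgs at each depth
```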
Thank you! From lines 395 to 1076 (both inclusive), there are only a few distinct dates across all entries, which seem to be something other than when the reorg happened (maybe that's when the log file was exported). Do you see a way to assign actual dates to those? Also, among those records (where the date is thus not indicative), I've found a real outlier, supposedly a 26-block reorg (L751):

Does anyone remember this occurrence? (I don't.)
I agree that the maximum 3-block depth in this data, presuming there were malicious attempts among those, doesn't mean that a longer-range attack is uneconomic for current potential attackers. But attackers are always incentivized to some extent to double-spend, even now; only an infinite lock period can remove this incentive. It's just that with a shorter one, they are incentivized more.

10 blocks has worked out so far, and that's valuable empirical data. Reducing it to, let's say, 4 would be clearly reckless, but I do wonder if it would be safe to go below 10. It's possible that there is no way to know the answer. Other than this data set, the only useful approach I can think of right now is making a minor reduction in the lock time, collecting reorg data in that environment for an extended period, and then reassessing. Since in general there is no specialized mining hardware, I would assume wide availability of hardware ready to enter mining, so this decision definitely needs to be made with caution.
Here are some quick-and-dirty visualizations that no one asked for, from hyc's data. I know this reflects only a single node's view of the network, but it's the best I've encountered so far. Absent further clarification, I excluded reorgs that didn't have a definitive date attached (lines 395-1076, inclusive), because obviously I can't plot them on a time axis without knowing when they occurred; this set amounts to ~56% of the total registered reorgs. All reorgs with a definitive date have a reported depth of 2 blocks.

The first chart shows how many reorgs were picked up per month (2019-11 to 2023-03), overlaid with the hash rate from Bitinfocharts. The number of reorgs seems to strongly correlate with the hash rate. Carbon Chameleon's PoW change is responsible for the large hash rate jump there, and a reduction in reorgs seems to co-occur with it. This can't be stated conclusively because some of the "dateless" reorgs may have happened between Carbon Chameleon and 2020-12-25, though I'm not sure that even all 77 "dateless" reorgs could erase that significant reduction. The correlation seems to weaken after Fluorine Fermi, and the number of reorgs falls significantly. I wonder if these major reductions are due to networking updates in the forks or something else.

The second chart is a shot at visualizing all reorgs with a definitive date (2019-10-29 to 2023-04-20, inclusive), each reorg being a dot and the Y axis showing how many hours elapsed since the last reorg, sort of a "time between reorgs" chart. This chart may not be as useful, but it seems that the time between reorgs kept improving after Fluorine Fermi against a relatively stable hash rate. These are all subjective observations; I'm sure that with proper statistical methods one can glean better insights.
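To reproduce this kind of chart, here is a minimal R sketch. The names `reorg.dates` (a Date vector of dated reorg events) and `hashrate.monthly` (a data frame with `month` and `hashrate` columns) are hypothetical placeholders for data parsed from reorgs.txt and exported from Bitinfocharts:

```r
# Hypothetical inputs:
#   reorg.dates      - Date vector, one entry per dated reorg event
#   hashrate.monthly - data frame with columns: month (Date), hashrate (H/s)
monthly.counts <- table(format(reorg.dates, "%Y-%m"))
count.months <- as.Date(paste0(names(monthly.counts), "-01"))

par(mfrow = c(2, 1)) # Two stacked panels covering the same time span
plot(count.months, as.numeric(monthly.counts), type = "h",
     xlab = "Month", ylab = "Reorgs per month")
plot(hashrate.monthly$month, hashrate.monthly$hashrate, type = "l",
     xlab = "Month", ylab = "Network hash rate (H/s)")
```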
The Noncesense Research Lab has stuff on reorgs: https://github.com/noncesense-research-lab/archival_network. I don't know where the actual data or results are, though.
Thanks @Gingeropolous. I guess the relevant data is buried in /raw_log_dumps. It would be interesting to see if it matches up with hyc's data, but for now there are serious doubts that a reduction could be done safely, no matter how good the recent results are (see the chat log of today's MRL meeting).

Edit (2024-03-29): I noticed the above logs have @hyc saying it's dangerous to go below 10 blocks, but his precise reasoning happened outside the meeting hours. I'll offer a summary from memory, which you may be able to find in the full MRL chat logs (unfortunately I couldn't, but it happened around those days): he basically said that the risk of a reorg will increase unpredictably as the lock period is reduced. Years of experience show that 10 is safe, but reducing even just to 9 may cause catastrophic failures. It's like standing blindfolded near a precipice: you know the edge is at most 10 steps away, but you can't see where exactly, so any step forward may cause your downfall.
(Correct link: #95 (comment).) I'm far from having processed the whole problem space, but I think it's a smart proposal, at the cost of protocol complexity that is neither negligible nor crushing. I encourage reading it and the comments that follow through 2022-12-29; they help in understanding its strengths and weaknesses. The proposal is in the 10-blocks lock elimination thread, but in practice it can only reduce the lock, hence it is relevant here ("in practice you would only select decoys from blocks older than some threshold larger than the average time it takes to propagate a double-spend report").

I like that it can (almost always) work without affecting transaction uniformity. It still troubles me that it requires assumptions about network behavior whose probability we can't estimate and about which we don't have enough historical data/experience. But my takeaway is that it may be impossible to reduce the length of the lock without such assumptions, and going that route can lead to breaking aspects of the network. To move forward, I think we'd need a way to quantify the probability of transaction invalidation through a reorg under the current protocol, and see whether an alternative model can reliably deliver the same level with certain parameters (if any).

Off-topic: the conservative in me wants to see the network as robust as possible and solve fast spendability on a layer 2 instead. On the other hand, payment channels on Monero are currently only theoretical (I know about five papers, and that's it) and rollups haven't even been seriously theorized. It would also prove useful not to fragment the network into layers.
Potentially useful regarding quantification (h/t to Rucknium for finding it):
Probably a very dumb question, but as a regular user I'm going to ask it anyway. Why not move stagenet or testnet to, say, eight or nine blocks, run it, and see what happens? After all, isn't that what they are there for, to test experiments with? Either that, or maybe Wownero may be willing to try it out.
@shortwavesurfer2009 Not a dumb question. In my view, that wouldn't help a lot, for the following reasons:
FYI, this proposal should be considered obsolete because it can't work with full-chain membership proofs (FCMPs). With FCMPs, the decoy set will be defined by a reference block and will contain all outputs created up to (and including) that block. Zcash works nearly the same way (they call their reference block an "anchor"). Interestingly, they don't have a consensus-mandated lock time and simply expect users to resubmit any transactions invalidated by reorgs; some Zcash wallets have a voluntary lock. Here is the relevant discussion: zcash/zcash#1614
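As a toy illustration of the reference-block idea (my own sketch, not actual FCMP code): the prover's anonymity set is simply every output created at or before the anchor block.

```r
# Toy model: outputs are members of the FCMP anonymity set once their
# block height is <= the chosen reference ("anchor") height.
outputs <- data.frame(id = 1:6, height = c(100, 101, 101, 102, 103, 104))
anchor.height <- 102
membership.set <- outputs[outputs$height <= anchor.height, ]
print(membership.set) # Outputs in blocks 100-102 are provable members
```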
### Success probability of a double-spend attack with minority hashpower share

**TL;DR:** The probability of a successful double-spend attack using a minority of hashpower is computed. The attack success probability is very nonlinear with respect to the attacker's hashpower share. When a potential victim waits 10 blocks and the attacker has 10 percent of hashpower, the attack success probability is negligible. When a potential victim waits 10 blocks and the attacker has 30 to 40 percent of hashpower, the attack success probability is pretty high.

Below I analyze two attack strategies. The first is the classic attack analyzed by Satoshi Nakamoto in the bitcoin white paper. The classic attack has a single cycle: mine an attacking chain until it outpaces the honest chain or until the end of time. The second attack strategy mines an attacking chain until the "full confirmation" block depth is reached on the honest chain. If the attacking chain is longer at that point in time, the attacker broadcasts the chain and the attack is a success. If not, the attacker tries to restart the attack cycle.

When a potential victim waits 10 blocks and the attacker has 10 percent of hashpower, the classic Nakamoto attack has a 0.0008 percent probability of success. With 30 and 40 percent of hashpower, the attacker has a 6.5 and 37.2 percent probability of success, respectively, in the same scenario. If the attacker uses the second strategy, an attacker possessing 10 percent of hashpower for 12 years will be able to execute a successful attack with 50 percent probability if the potential victim waits 10 blocks for full confirmation. An attacker possessing 30 or 40 percent of hashpower will need to possess the hashpower for 8.6 or 1.5 hours, respectively, to achieve a 50 percent attack success probability.

In my opinion, an attacker that could pay a roughly linear cost to acquire a linear amount of hashpower would find it advantageous to go big or go home. There is no reason to pay for only 10 percent of hashpower to get a negligible success probability when the attacker could be guaranteed success by paying for 51+ percent of hashpower. It could be reasonable to assume that such an adversary would not attempt a minority hashpower attack, and therefore only the probability of benign blockchain reorgs should be considered when analyzing the best N for the N-block lock.

However, the "linear cost" attacker isn't the only potential threat actor. A hacker or malicious insider could acquire control over the block templates of a mining pool. This threat actor would not pay a linear cost to acquire malicious hashpower. Instead, their malicious hashpower share would be fixed at the aggregate hashpower of miners who hash on block templates from the mining pool operator. As of this writing, the top mining pool routinely possesses 30 to 40 percent of Monero's hashpower. See https://miningpoolstats.stream/monero for the current hashpower share of the major mining pools.

### Strategy 1: Double-spend attack success probability of the classic Nakamoto attack

The bitcoin white paper describes attempted double-spend attacks where the attacker has a minority hashpower share and the potential victim waits $z$ blocks before considering the transaction confirmed. Theorem 1 of Grunspan & Perez-Marco (2018) states:

Let $0 < q < 1/2$ be the attacker's share of hashpower and $p = 1 - q$. After the victim waits for $z$ confirmation blocks, the probability of success of a double-spend attack is

$$P(z) = I_{4pq}(z, 1/2),$$

where $I_x(a, b)$ is the regularized incomplete beta function.

Below is a table of attack success probabilities based on Theorem 1. Columns are the hashpower share of the adversary. Rows are the number of mined blocks that the victim waits before considering a transaction "confirmed". Cells are the attack success probability, in percent.
R code to reproduce the table:

```r
# install.packages("zipfR")
# install.packages("knitr")
# Rows: 1 to 30 blocks waited before considering a transaction confirmed
# Columns: attacker hashpower share: 5%, 10%, 20%, 30%, 40%, 45%
# Based on Theorem 1 of Grunspan & Perez-Marco (2018), "Double spend races."
n.blocks <- 30
hashpower.share <- c(0.05, 0.10, 0.20, 0.30, 0.40, 0.45)
blocks <- 1:n.blocks

results <- matrix(0, nrow = n.blocks, ncol = length(hashpower.share))

for (i in seq_along(hashpower.share)) {
  q <- hashpower.share[i] # Attacker's hashpower share
  p <- 1 - q              # Honest network's hashpower share
  # Theorem 1: P(z) = I_{4pq}(z, 1/2), the regularized incomplete beta function
  results[, i] <- zipfR::Rbeta(x = 4*p*q, a = blocks, b = 1/2)
}

results <- 100 * results # Convert to percentage

rownames(results) <- as.character(blocks)

# Normalize each column by the success probability at 10 blocks waited
# (results.comparison is computed for reference, but not printed below)
comparison.block <- 10
results.comparison <- results

for (j in seq_len(ncol(results.comparison))) {
  divisor <- results[blocks == comparison.block, j]
  results.comparison[, j] <- results.comparison[, j] / divisor
}

knitr::kable(results, format = "pipe", row.names = TRUE,
  col.names = hashpower.share, digits = 5)
```
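As a quick sanity check against the TL;DR figures above (10 blocks waited; 10, 30, and 40 percent hashpower), the following one-liner should reproduce roughly 0.0008, 6.5, and 37.2 percent:

```r
# Spot check of Theorem 1 at z = 10 blocks for q = 0.10, 0.30, 0.40
q <- c(0.10, 0.30, 0.40)
100 * zipfR::Rbeta(x = 4 * (1 - q) * q, a = 10, b = 1/2) # In percent
```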
### Strategy 2: Required possession duration of malicious hashpower for a successful double-spend attack with 50 percent probability

Below is a table for the second attack strategy. Columns are the hashpower share of the adversary. Rows are the number of mined blocks that the victim waits before considering a transaction "confirmed". Cells are the number of days the adversary must possess the hashpower to achieve a 50 percent probability of at least one successful attack.

Blocks waited | 0.05 | 0.1 | 0.2 | 0.3 | 0.4 | 0.45 |
---|---|---|---|---|---|---|
1 | 0.389 | 0.097 | 0.028 | 0.010 | 0.007 | 0.007 |
2 | 2.800 | 0.382 | 0.058 | 0.019 | 0.010 | 0.010 |
3 | 18.303 | 1.350 | 0.117 | 0.031 | 0.013 | 0.013 |
4 | 114.393 | 4.586 | 0.233 | 0.053 | 0.024 | 0.015 |
5 | 695.467 | 15.128 | 0.450 | 0.072 | 0.028 | 0.018 |
6 | 4148.646 | 48.833 | 0.833 | 0.104 | 0.032 | 0.021 |
7 | 24406.850 | 155.156 | 1.512 | 0.143 | 0.036 | 0.024 |
8 | > 100 yrs | 486.719 | 2.733 | 0.201 | 0.040 | 0.026 |
9 | > 100 yrs | 1511.525 | 4.857 | 0.268 | 0.058 | 0.029 |
10 | > 100 yrs | 4654.446 | 8.536 | 0.360 | 0.064 | 0.032 |
11 | > 100 yrs | 14229.972 | 14.896 | 0.482 | 0.069 | 0.035 |
12 | > 100 yrs | > 100 yrs | 25.778 | 0.637 | 0.075 | 0.038 |
13 | > 100 yrs | > 100 yrs | 44.342 | 0.853 | 0.101 | 0.040 |
14 | > 100 yrs | > 100 yrs | 75.825 | 1.112 | 0.108 | 0.043 |
15 | > 100 yrs | > 100 yrs | 128.989 | 1.444 | 0.115 | 0.046 |
16 | > 100 yrs | > 100 yrs | 218.417 | 1.860 | 0.146 | 0.072 |
17 | > 100 yrs | > 100 yrs | 368.317 | 2.390 | 0.154 | 0.076 |
18 | > 100 yrs | > 100 yrs | 618.781 | 3.050 | 0.162 | 0.081 |
19 | > 100 yrs | > 100 yrs | 1036.086 | 3.911 | 0.200 | 0.085 |
20 | > 100 yrs | > 100 yrs | 1729.467 | 4.964 | 0.210 | 0.089 |
21 | > 100 yrs | > 100 yrs | 2878.854 | 6.287 | 0.219 | 0.093 |
22 | > 100 yrs | > 100 yrs | 4779.739 | 7.976 | 0.261 | 0.097 |
23 | > 100 yrs | > 100 yrs | 7916.962 | 10.064 | 0.272 | 0.101 |
24 | > 100 yrs | > 100 yrs | 13084.633 | 12.692 | 0.319 | 0.106 |
25 | > 100 yrs | > 100 yrs | 21581.438 | 15.944 | 0.332 | 0.110 |
26 | > 100 yrs | > 100 yrs | 35528.583 | 19.992 | 0.382 | 0.114 |
27 | > 100 yrs | > 100 yrs | > 100 yrs | 25.056 | 0.396 | 0.118 |
28 | > 100 yrs | > 100 yrs | > 100 yrs | 31.344 | 0.451 | 0.122 |
29 | > 100 yrs | > 100 yrs | > 100 yrs | 39.124 | 0.467 | 0.126 |
30 | > 100 yrs | > 100 yrs | > 100 yrs | 48.769 | 0.525 | 0.131 |
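Concretely, the table models the attack as repeated independent cycles (see the code below): each cycle succeeds with probability $I_q(z, z)$ and lasts $z + \frac{1-q}{q}$ blocks, so the number of cycles needed for a 50 percent chance of at least one success is

$$n_{\text{cycles}} = \left\lceil \frac{\ln(1 - 0.5)}{\ln\left(1 - I_q(z, z)\right)} \right\rceil,$$

and the total required duration is $n_{\text{cycles}} \cdot \left(z + \frac{1-q}{q}\right)$ blocks, converted to days at 720 Monero blocks per day.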
R code to reproduce the table above:

```r
# install.packages("zipfR")
# install.packages("knitr")
Pr_meta <- 0.5 # Target probability of at least one successful attack
n.blocks <- 30
hashpower.share <- c(0.05, 0.10, 0.20, 0.30, 0.40, 0.45)
z <- 1:n.blocks

results <- matrix(0, nrow = n.blocks, ncol = length(hashpower.share))

for (i in seq_along(hashpower.share)) {
  q <- hashpower.share[i]
  # Per-cycle attack success probability: I_q(z, z)
  Iq <- zipfR::Rbeta(x = q, a = z, b = z)
  # Number of cycles needed for Pr_meta overall success, multiplied by the
  # length of one attack cycle, z + (1-q)/q blocks
  results[, i] <- ceiling(ceiling(log(1 - Pr_meta)/(log(1 - Iq))) * (z + (1 - q)/q))
}

results <- results / (30*24) # Convert blocks to days (Monero: 30 blocks/hour)

rownames(results) <- as.character(z)

results[(!is.finite(results)) | results >= 365.25 * 100] <- NA
options(knitr.kable.NA = "> 100 yrs")

knitr::kable(results, format = "pipe", row.names = TRUE,
  col.names = hashpower.share, digits = 3)
```
References

Grunspan, C., & Perez-Marco, R. (2018). "Double spend races." International Journal of Theoretical and Applied Finance, 21(8).
Hinz, J. (2020). "Resilience Analysis for Double Spending via Sequential Decision Optimization." Applied System Innovation, 3(1), 7.
Jang, J., & Lee, H.-N. (2020). "Profitable Double-Spending Attacks." Applied Sciences, 10, 8477.
Nakamoto, S. (2008). "Bitcoin: A Peer-to-Peer Electronic Cash System."
Rosenfeld, M. (2014). "Analysis of Hashrate-Based Double Spending."
The 10-blocks lock is the most problematic aspect of Monero: it heavily impacts its usability as a currency and the user experience of people using services based on Monero. #95 explores the possibility of removing the lock, which would be the optimal solution, but the problem is not simple to resolve and total removal might only happen in the far future.

It is worth considering whether the premises exist for reducing the lock to a smaller number of blocks without impacting the security and stability of the network. A couple of questions to get the conversation started:

Note that even a reduction from 10 to 8 blocks would be a significant UX improvement for Monero users.