merge_algorithm causes failed transactions #446
Agreed. The current protocol is unable to handle large numbers of UTXOs and that causes all kinds of problems. Linking to #171.
@raedah I don't think max_inputs is a good target for direct configuration. Shouldn't we calculate that based on the timeout parameter we expect takers to configure, and our own rate limiting? Quoting from IRC:
If the default maker timeout is currently 60 (or is it 30?), the current rate of 66 bytes per second suggests limiting input size to …

Aside: playing around with input-output linking for CJHunt suggests to me that including more inputs (ideally also multiple change addresses, but let's cross one bridge at a time) causes algorithmic complexity headaches for analysts.
@adlai What do you mean by "the current rate of 66 bytes per second"? If you were referring to the 67 in that chat excerpt, it's not relevant; that was an estimate for xchat (and completely wrong, @chris-belcher worked it out from the source, it seems to be 120). Meanwhile the current JM throttling code is at 300 bytes/second. Also, for your calculation I think you have 180 bytes per input? We could start from there and say ~150 per input, then b64 expansion to ~190, and we're ignoring outputs, which is not very accurate (and also ignoring the second b64 expansion; let's pretend that's fixed). So I went with 250 here, which is probably more realistic, especially at lower numbers of inputs. I think "in the limit" it might be nearer 200. Anyway, if we go with 200 bytes/input, a 60 second timeout, and 300 bytes/s throughput, and we allocate half the time to just sending the transaction, we get 45 inputs max. I think it's more like 30 today (because of the 2nd base64 expansion). But, just too many variables :)

Edit: two things wrong with this. First, you have to measure the total output bytes/sec against the sum of all the output bytes for a transaction, which is (N-1) times what's in the transaction. Second, I forgot that !tx does not include the signatures, so it's more like 75 bytes for an input, plus output and overhead (small), and two b64 expansions; maybe 140 bytes per input, then times (N-1), whatever N is, and you're starting to get a more realistic sense of the limits.
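To make the corrected arithmetic concrete, here is a minimal back-of-the-envelope sketch. The function name and all constants are illustrative assumptions taken from the discussion above, not measured values or actual JoinMarket code:

```python
# Rough estimate of how many inputs fit through the message channel
# before the taker's timeout fires. All numbers are assumptions from
# this thread: 60s timeout, 300 bytes/s IRC throttle, ~140 bytes per
# input after signatures and two b64 expansions, half the timeout
# budgeted for sending.

def estimate_max_inputs(timeout_s=60,        # taker timeout (seconds)
                        throughput=300,      # IRC throttle (bytes/second)
                        bytes_per_input=140, # ~75 raw + overhead, twice b64-expanded
                        counterparties=4,    # N: total parties in the join
                        send_fraction=0.5):  # share of timeout spent sending the tx
    # The reply goes to N-1 counterparties, so effective output is
    # (N-1) times the per-message size.
    budget_bytes = timeout_s * send_fraction * throughput
    cost_per_input = bytes_per_input * (counterparties - 1)
    return int(budget_bytes // cost_per_input)

print(estimate_max_inputs())  # ~21 inputs with these assumptions
```

With the pre-edit figures (200 bytes/input, no (N-1) factor) the same budget gives the 45 quoted above, which shows how sensitive the limit is to those variables.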
I just realised this is not true. wallet.json will store the current index in …
The failure seems to be caused by something else, so my original theory is not correct. Transactions do still time out nonetheless, so the issue is still valid; it just doesn't cause wallet failure.

edit: btw, the failure was caused by changes in my c++ code for bitcoin-qt listtransactions.
There are many cases where a merge_algorithm other than the default will choose a large number of inputs. The default taker timeout is 30 seconds, so if the maker does not transmit all the signed inputs within that amount of time, they will be dropped. This leads to a situation where the maker is often failing out of transactions, which causes the unused-address gap to grow past the gap limit, making coins become unavailable. The yg user can eventually find his coins with wallet-tool using the gap limit option, but no such option is available when running the yg, so the yg will no longer be able to use that wallet.
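For context on why the coins "disappear": a deterministic wallet typically stops scanning a branch once it sees gap_limit consecutive unused addresses. A minimal sketch of that logic (generic, with a hypothetical is_used predicate, not the actual JoinMarket wallet code):

```python
def find_used_indices(is_used, gap_limit=6):
    """Scan derivation indices 0, 1, 2, ... and stop after `gap_limit`
    consecutive unused addresses; anything funded beyond that point is
    invisible until the wallet is rescanned with a larger limit."""
    used, gap, index = [], 0, 0
    while gap < gap_limit:
        if is_used(index):
            used.append(index)
            gap = 0
        else:
            gap += 1
        index += 1
    return used

# A coin at index 9 behind six unused addresses is missed at the
# default limit, but found when rescanning with a larger one:
funded = {0, 1, 2, 9}
print(find_used_indices(lambda i: i in funded, gap_limit=6))  # [0, 1, 2]
print(find_used_indices(lambda i: i in funded, gap_limit=7))  # [0, 1, 2, 9]
```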
Possible solution: The options for merge_algorithm are currently default, gradual, greedy, greediest. This could be changed to a max_inputs option, which could be set to an integer, with a merge algo that selects inputs based on that number. Perhaps also a merge_more = True option, where it will try to select more UTXOs instead of fewer, as sketched below.
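A minimal sketch of what such a capped selection could look like (the function name and signature are hypothetical; the real merge_algorithm implementations live in the wallet code):

```python
def select_utxos_capped(utxos, amount, max_inputs=10, merge_more=False):
    """Pick UTXOs covering `amount` while never exceeding `max_inputs`.

    utxos: list of (txid:vout, value) pairs. With merge_more=True we
    prefer many small inputs (consolidating dust); otherwise we take
    few large ones to minimize the input count.
    """
    # Smallest-first merges more dust; largest-first needs fewest inputs.
    ordered = sorted(utxos, key=lambda u: u[1], reverse=not merge_more)
    selected, total = [], 0
    for utxo in ordered:
        if total >= amount:
            break
        if len(selected) == max_inputs:
            if merge_more:
                # Can't fit more inputs under the cap; fall back to
                # largest-first so the amount can still be covered.
                return select_utxos_capped(utxos, amount, max_inputs, False)
            raise ValueError("amount not reachable within max_inputs")
        selected.append(utxo)
        total += utxo[1]
    if total < amount:
        raise ValueError("insufficient funds")
    return selected

utxos = [("a:0", 5000), ("b:1", 12000), ("c:0", 800), ("d:2", 800)]
print(select_utxos_capped(utxos, 6000, max_inputs=3, merge_more=True))
# [('c:0', 800), ('d:2', 800), ('a:0', 5000)]
```

Keeping max_inputs below the throughput-derived limit estimated earlier in the thread would let makers consolidate as aggressively as the taker timeout allows without failing out of transactions.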