This repository has been archived by the owner on May 13, 2022. It is now read-only.

merge_algorithm causes failed transactions #446

Closed
raedah opened this issue Mar 4, 2016 · 6 comments

Comments

@raedah
Contributor

raedah commented Mar 4, 2016

There are many cases where a merge_algorithm other than the default will choose a large number of inputs. The default taker timeout is 30 seconds, so if the maker does not transmit all the signed inputs within that time, they will be dropped. This leads to a situation where the maker is often failing out of transactions, which causes the gap limit to increase, making coins become unavailable. The yg user can eventually find his coins with wallet-tool using the gap limit option, but no such option is available when running the yg, so the yg will no longer be able to use that wallet.

Possible solution: The options for merge_algorithm are currently default, gradual, greedy, greediest. This could be changed to a max_inputs option which could be set to an integer, with a merge algorithm that selects inputs based on that number. Perhaps also a merge_more = True, where it will try to select more utxos instead of fewer.
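A hypothetical sketch of what a max_inputs-style selection could look like (not JoinMarket's actual merge_algorithm code; the function name and signature are made up for illustration):

```python
def select_utxos(utxos, target, max_inputs=10):
    """utxos: list of (txid, value) pairs; target: amount in satoshis.

    Returns the selected utxos, or raises if the target cannot be
    covered within max_inputs inputs.
    """
    selected, total = [], 0
    # Largest-first keeps the input count low, at the cost of merging
    # fewer small utxos per transaction.
    for utxo in sorted(utxos, key=lambda u: u[1], reverse=True):
        if total >= target:
            break
        if len(selected) == max_inputs:
            raise ValueError("target not reachable within max_inputs")
        selected.append(utxo)
        total += utxo[1]
    if total < target:
        raise ValueError("insufficient funds")
    return selected
```

A merge_more = True variant would invert the sort (smallest-first) to sweep up dust while still respecting the same cap.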

@chris-belcher
Collaborator

Agreed. The current protocol is unable to handle large numbers of UTXOs and that causes all kinds of problems. Linking to #171.

@adlai
Contributor

adlai commented Mar 5, 2016

@raedah I don't think max_inputs is a good target for direct configuration. Shouldn't we calculate that based on the timeout parameter we expect takers to configure, and our own rate limiting?

Quoting from IRC:

23:28:14       belcher | an irc message can be 512 bytes long including the final \r\n according to the irc protocol RFC                         
23:28:28       waxwing | like, 5 messages max in 30 seconds, so 2000 bytes max in 30 seconds                                                     
23:28:30       waxwing | oh ok                                                                                                                   
23:28:43       waxwing | we have basically 3000 max in 10 seconds in the current irc.py                                                          
23:32:32       belcher | so our 3 bytes per second vs xchat's 67 bytes per second                                                                
23:32:46       waxwing | this is why i'm keenly looking at alternatives that might allow bursty traffic and not have censorship or cpof ..easy :)
23:33:29       waxwing | no 300 per second for us                                                                                                
23:33:29       waxwing | why 67?                                                                                                                 
23:33:29       belcher | oh right, der                                                                                                           
23:33:30       belcher | 2000 / 30 = 66.666666 bytes/sec                                                                                         
23:33:49       waxwing | oh yeah ofc, der here too :)                                                                                            
23:34:16       waxwing | but i just made that up because it looked plausible from the settings, i don't know                                     
23:34:21       belcher | made what up ?                                                                                                          
23:34:24       waxwing | anyway it's a client so there's that too                                                                                
23:34:40       waxwing | i "made up" the interpretation of what i saw in flood_msg_time and flood_msg_num                                        
23:34:53       belcher | right ok                                                                                                                
23:34:58       waxwing | plus i used 400 bytes per line. just a bunch of assumptions.                                                            

If the default maker timeout is currently 60 (or is it 30?), the current rate of 66 bytes per second suggests limiting input size to (/ (* 30 66) 180) = 11.
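That back-of-envelope (the s-expression above) can be written out explicitly; all three numbers are the estimates quoted from IRC, not measured values:

```python
def max_inputs(timeout_sec, bytes_per_sec, bytes_per_input):
    # Total bytes transmittable in the window, divided by the
    # estimated wire cost of one signed input.
    return (timeout_sec * bytes_per_sec) // bytes_per_input

# 30 sec window, ~66 bytes/sec, ~180 bytes per input:
print(max_inputs(30, 66, 180))  # → 11
```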

Aside: playing around with input-output linking for CJHunt suggests to me that including more inputs (ideally also multiple change addresses, but let's cross one bridge at a time) causes algorithmic complexity headaches for analysts.

@raedah
Contributor Author

raedah commented Mar 5, 2016

@adlai taker timeouts issue here #426

@AdamISZ
Member

AdamISZ commented Mar 5, 2016

@adlai What do you mean by "the current rate of 66 bytes per second"? If you were referring to the 67 in that chat excerpt, it's not relevant: that was an estimate for xchat (and a completely wrong one; @chris-belcher worked it out from the source, and it seems to be 120). Meanwhile the current JM throttling code is at 300 bytes/second.

Also, for your calculation I think you have 180 bytes per input? We could start from there and say ~150 per input, then b64 expansion to ~190, and we're ignoring outputs, which is not very accurate (and also ignoring the second b64 expansion; let's pretend that's fixed). So I went with 250 here, which is probably more realistic, especially at lower numbers of inputs. I think "in the limit" it might be nearer 200.

Anyway if we go with 200bytes/input and 60 sec timeout and 300 bytes/s throughput, and we allocate half the time to just sending the transaction we get 45 inputs max. I think it's more like 30 today (because of 2nd base64 expansion). But, just too many variables :)

Edit: two things are wrong with this. First, you have to measure the total output bytes/sec against the sum of all the output bytes for a transaction, which is (N-1) times what's in the transaction. Second, I forgot that !tx does not include the signatures, so it's more like 75 bytes for an input, plus output and overhead (small), and two b64 expansions: maybe 140 bytes per input, then times (N-1) for whatever N is, and you're starting to get a more realistic sense of the limits.
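Putting the revised numbers from that edit into one rough model (every parameter here is an estimate from this thread, and N, the number of counterparties, is an illustrative assumption, not anything the protocol fixes):

```python
def max_inputs(timeout_sec=60, throughput=300, bytes_per_input=140,
               n_counterparties=5, time_fraction=0.5):
    # Byte budget: half the timeout window at the throttled rate.
    budget = timeout_sec * throughput * time_fraction
    # The maker's bytes go out to N-1 counterparties, so the
    # effective per-input cost scales with N.
    per_input = bytes_per_input * (n_counterparties - 1)
    return int(budget // per_input)
```

With the defaults above this lands in the mid-teens, well under the 45 from the pre-edit estimate, illustrating how sensitive the limit is to N.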

@chris-belcher
Collaborator

This leads to a situation where the maker is often failing out of transactions, which causes the gap limit to increase, making coins become unavailable. The yg user can eventually find his coins with wallet-tool using the gap limit option, but no such option is available when running the yg, so the yg will no longer be able to use that wallet.

I just realised this is not true.

wallet.json will store the current index in index_cache. You only need to play with gap limits when restoring from seed.

@raedah
Contributor Author

raedah commented Mar 5, 2016

The failure seems to be caused by something else, so my original theory is not correct. Transactions do still time out nonetheless, so the issue is still valid. It just doesn't cause wallet failure.

edit: btw, the failure was caused by changes in my C++ code for bitcoin-qt listtransactions.

@raedah raedah changed the title merge_algorithm causes wallet failure merge_algorithm causes failed transactions Mar 5, 2016