Conversation
Nice that someone finally did this. For criticism:
@chris-belcher thanks for those suggestions; I will look into the thread communication now. As for the kilobytes per second, yes, that feels more likely to address whatever the rules for flooding are, but it would be great to be able to pin them down. Whenever I start trawling the web for IRC server configuration on flooding I just end up confused :)
Updated in line with the above; see the commit notes. If this construction makes sense, I am left rather in the dark as to what the right settings for B_PER_SECOND, B_PER_SECOND_INTERVAL and MSG_INTERVAL should be. I have gone with 0.3s for the minimum delay between sending lines, and 200 bytes/sec over a 10 second period for the latter. In this configuration, if B_PER_SECOND_INTERVAL is too big it will cause timeout failures in most cases, because if the second throttling limit is hit (too many bytes per second), it waits for half of that interval; so as currently configured it waits for 5 seconds before starting to send messages again. No idea what any of these numbers should actually be :) The extra code in taker.py is to address #185; I was reminded of it by one of the experiments, where the cjtx had not been completely constructed before on_pubkey was called, leading to a crash (this doesn't happen in this version, but it did in one of my earlier tries). I could see no reason not to add those checks, so I went ahead with it here. The revised version still passes regtest.py and tumbler-test.py.
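For concreteness, here is a minimal sketch of the kind of two-tier throttle described above, assuming outgoing lines arrive on a queue. The names MSG_INTERVAL, B_PER_SECOND and B_PER_SECOND_INTERVAL are the settings being discussed; the queue and send callback are hypothetical plumbing, not the actual joinmarket ThrottleThread:

```python
# Illustrative sketch only, not the actual joinmarket ThrottleThread.
# MSG_INTERVAL, B_PER_SECOND and B_PER_SECOND_INTERVAL are the settings
# discussed above; the queue and send callback are hypothetical plumbing.
import time
import threading

MSG_INTERVAL = 0.3          # minimum seconds between sent lines
B_PER_SECOND = 200          # allowed average bytes per second...
B_PER_SECOND_INTERVAL = 10  # ...measured over this many seconds


class ThrottleThreadSketch(threading.Thread):
    def __init__(self, sendq, send_line):
        threading.Thread.__init__(self)
        self.daemon = True
        self.sendq = sendq          # queue of outgoing lines
        self.send_line = send_line  # callable that writes one line to the socket
        self.sent = []              # (timestamp, num_bytes) of recent sends

    def bytes_in_window(self, now):
        # drop entries older than the measurement window, sum the rest
        self.sent = [(t, n) for (t, n) in self.sent
                     if now - t < B_PER_SECOND_INTERVAL]
        return sum(n for (_, n) in self.sent)

    def run(self):
        while True:
            line = self.sendq.get()  # blocks until a line is queued
            # second throttle: too many bytes in the recent window ->
            # back off for half the window (5 seconds here), as described above
            if self.bytes_in_window(time.time()) > B_PER_SECOND * B_PER_SECOND_INTERVAL:
                time.sleep(B_PER_SECOND_INTERVAL / 2.0)
            self.send_line(line)
            self.sent.append((time.time(), len(line)))
            # first throttle: minimum gap between lines
            time.sleep(MSG_INTERVAL)
```

With these numbers the byte budget is 200 * 10 = 2000 bytes per 10 second window, and hitting it stalls all sending for 5 seconds, which is where the timeout risk mentioned above comes from.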
Fixed up a couple of errors, build should pass now.
Looks OK. BTW, which IRC client do you use for everyday IRCing? Could be worth checking how the throttling on it works, by pasting in a paragraph of text and sending.
So you mean try that out on cyberguerrilla, to figure it out empirically? I guess that could be an approach.
Or just use the default rate that your client uses. It doesn't matter too much if a rate is chosen that's a bit too slow.
I see, I understand the suggestion now. I'll take a look at xchat. I agree that being conservative makes sense; that's what I've tried to do so far, but at some point it will bump up against joinmarket internal timeouts.
Xchat has these in the config file: flood_msg_num = 5. From rooting around in the xchat source and displaying the throttling on the GUI, it looks like this is the principal throttle mechanism, although I'm not sure there isn't a bytes/sec limit as well (it may be implied, because they limit line length). This seems too conservative for us; the throttle slows down both sides due to the interactivity of the protocol, so !fill !pubkey !auth !ioauth !tx (x 2 or 3) !sig (x 2 or 3), replicated over several counterparties, could end up taking quite a while. I've tested tumbler with 1 msg per second and 100 bytes per second over 10 seconds (which corresponds to a maximum of a little over 2 lines per 10 seconds for full ~400 byte lines) as a... wild stab in the dark :) Testing that, it works fine, except that maker_timeout_sec = 30 is too low in this case. Edited to 60 and it ran fine. Thoughts welcome.
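For comparison, xchat's flood_msg_num counts messages in a time window rather than bytes. A rough sketch of that style of limiter is below; flood_msg_num = 5 is the value quoted from the config above, while the window length is a placeholder, since the corresponding time setting isn't quoted here:

```python
# Sketch of an xchat-style "N messages per time window" limiter.
# flood_msg_num = 5 comes from the config value quoted above; the
# window length below is a placeholder, not xchat's actual setting.
import time

FLOOD_MSG_NUM = 5       # from xchat's flood_msg_num
FLOOD_MSG_TIME = 30.0   # placeholder window length, in seconds

_recent_sends = []


def throttle_before_send():
    """Block until sending one more message stays within the limit."""
    global _recent_sends
    while True:
        now = time.time()
        _recent_sends = [t for t in _recent_sends if now - t < FLOOD_MSG_TIME]
        if len(_recent_sends) < FLOOD_MSG_NUM:
            _recent_sends.append(now)
            return
        # sleep until the oldest send falls outside the window
        time.sleep(FLOOD_MSG_TIME - (now - _recent_sends[0]))
```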
Tweaked in line with the above; edited the maker_timeout_sec default to 60s also, as mentioned. I've done a bit more testing, including on testnet. I am happy to merge this now; will wait a day or so for any comments.
Does it really take more than 30 seconds from start to end of a coinjoin? That seems like it would lead to a bad UX when joinmarket is integrated into wallets. I'd think the only slowing down in practice would end up being when the taker sends the unsigned tx to all makers.
Yes, I pushed it a bit far with those numbers, on reflection. So, it's like this: that's obviously not feasible; even the initial handshake would slow down massively. I tried 1 line per 1 second and 1kB per 10 seconds, which is effectively 1 line per 4 or 5 seconds for full lines. I think the main place this gets hit is when the taker is sending the !tx messages, although it might happen with !sig also. So given our earlier experience in live operation, it does indeed seem that this is much too aggressive on the kB/s limit. I'll bump it up to 3kB per 10 seconds which, when lines are full, will throttle down to 300 bytes per sec, again ~1 line per second. And I'll revert the maker_timeout_sec change. I'm running this on mainnet as of last night. Had one tx already.
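The back-of-the-envelope arithmetic behind that bump, using the ~400-byte full line assumed earlier in the thread:

```python
# Rough arithmetic behind the settings above; 400 bytes is the
# "full line" size used earlier in the thread.
FULL_LINE = 400  # bytes

for kb_per_10s in (1, 3):
    bytes_per_sec = kb_per_10s * 1000 / 10.0
    secs_per_line = FULL_LINE / bytes_per_sec
    print("%dkB/10s -> %.0f B/s -> one full line every %.1f s"
          % (kb_per_10s, bytes_per_sec, secs_per_line))

# 1kB/10s -> 100 B/s -> one full line every 4.0 s
# 3kB/10s -> 300 B/s -> one full line every 1.3 s
```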
Final tweaks: I thought about it for a while and decided that maker_timeout_sec is best left at a generous 60 seconds, not 30. As long as we are on a rate-limited IRC server it's worth having the default create a better chance for large transactions to go through. The code as it stands does pass testing with 30 seconds, and since this is on the taker side, it can always be changed by the taker (it doesn't need the population of makers to agree with it). As for cases wanting high-speed throughput, they can be addressed with specific configuration, but ultimately they depend on not using a rate-limited communication rendezvous; at the least, we need something that accepts bursty traffic. We either need private IRC servers that are configured for our needs, or different messaging platforms.
A suggestion by @chris-belcher to "stress-test" with a 15 counterparty transaction threw up a fairly serious error in the previous version: PING was not getting immediate priority as intended. So I rewrote it significantly. Validated using detailed debug output that the PINGs are getting sent immediately (after 0.2 seconds, say), interrupting the other messages. For a 16 counterparty transaction example, sending ~2300 byte template transactions to all 15 makers in the transaction, thus about 9 lines to each maker, took around 3.5 minutes to complete OK. This means that for a taker wishing to do a large transaction, it's advisable to set the maker_timeout_sec config value much higher (say, 1000 seconds instead of 60). regtest and tumbler_test passed OK locally; something is wrong with travis, I will check it now.
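As an illustration only (not the actual rewrite), one way to get the "PING jumps the queue" behaviour is a priority queue polled at a short interval, so a ping never sits behind a long run of !tx lines. The 0.2 second poll matches the figure mentioned above; everything else here is a hypothetical sketch:

```python
# Illustrative only: a send loop where PING/PONG bypasses the normal
# ordering and the throttle wait is polled every 0.2 seconds, so a
# ping never waits behind a long run of !tx lines.
import time
import itertools
try:
    import Queue as queue   # Python 2, as in joinmarket of this era
except ImportError:
    import queue            # Python 3

PING_PRIORITY = 0    # lowest number is sent first
NORMAL_PRIORITY = 1
MSG_INTERVAL = 1.0   # minimum gap between ordinary lines
POLL_INTERVAL = 0.2  # how often to check for a pending ping while waiting

sendq = queue.PriorityQueue()
_counter = itertools.count()  # preserves FIFO order within a priority level


def enqueue(line):
    prio = PING_PRIORITY if line.startswith(("PING", "PONG")) else NORMAL_PRIORITY
    sendq.put((prio, next(_counter), line))


def send_loop(send_line):
    last_send = 0.0
    while True:
        prio, _, line = sendq.get()
        if prio != PING_PRIORITY:
            # ordinary lines respect the throttle, but while waiting we keep
            # checking whether a ping has arrived that should go out first
            while time.time() - last_send < MSG_INTERVAL:
                try:
                    p2, c2, l2 = sendq.get_nowait()
                    if p2 == PING_PRIORITY:
                        send_line(l2)            # a ping interrupts the wait
                    else:
                        sendq.put((p2, c2, l2))  # not a ping, put it back
                except queue.Empty:
                    pass
                time.sleep(POLL_INTERVAL)
            last_send = time.time()
        send_line(line)
```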
Force-pushed 581f830 to 749fd2a.
How long does a normal small coinjoin (with 2 makers) take to complete? It shouldn't take much longer than without the throttle thread, since those coinjoins never made the taker flood off.
Also, in Python a slightly more concise way of using locks is the with keyword.
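For reference, the two equivalent forms (illustrative snippet, not the joinmarket code):

```python
import threading

lock = threading.Lock()
shared_counter = 0


def increment_explicit():
    # explicit acquire/release, with try/finally to guarantee release
    global shared_counter
    lock.acquire()
    try:
        shared_counter += 1
    finally:
        lock.release()


def increment_with():
    # the same thing using "with": the lock is released automatically,
    # even if the body raises
    global shared_counter
    with lock:
        shared_counter += 1
```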
It's a little slower, due to the 1 line per second rate limit.
Force-pushed d34ec04 to eb65b2f.
Two throttling mechanisms: a limit to lines per second allowed, and a limit of bytes/second over a recent interval. These settings are currently in the constructor of the ThrottleThread. Change maker_timeout_seconds default to 60 for large txs. Remove sleep hack in sig sending, no longer needed. Minor updates: tumbler test -w wait time reset to 10s, extra travis bot logs. Minor additional update: address issue JoinMarket-Org#185.
Sanity-checked a 16 counterparty transaction again; worked OK in ~3.5 minutes with this setting. I believe this is ready to merge now.
Addressing #300 and #31 to some extent or other.
Edit: superseded by comments below; it has changed in some respects.
Functionality:
PING messages are sent immediately.
All other messages (specifically: lines) are sent no faster than one line per 0.5 seconds.
Non-orderbook response messages are given priority over orderbook response messages.
Testing:
passes regtest and tumbler-test
added debugs and checked bot logs to ensure that messages are spaced 0.5 seconds apart.
Comments:
This is not intended to be a fully fledged solution, but at least it works, now. Or seems to. Criticism is welcome. The main problem, and it's a big one, is how hard it will be to test if this ameliorates the problem of bots getting kicked when passing a lot of data, and, related, what the right time delay is. The only realistic way I can see of testing this (or any other solution to this problem) is to run a big testnet test on cyberguerrilla, with at least relatively large wallets, and at least 6-10 different bots at once in a pit, to see if we can trigger the kicks. Thoughts on that? Does anyone know what the flooding rules are for the cyberguerrilla server?