feat(utxo): prioritize electrum connections #1966
Conversation
Thank you for your effort!
The PR seems to be in its early stages right now. This is not a full review; I just did a quick pass to point out some areas that should not be forgotten.
Since this is still in progress, I left some comments that can help you improve the code.
I will do a more thorough review when this is ready for review!
Please also merge with latest dev and start adding doc comments.
mm2src/coins/utxo/rpc_clients.rs
```rust
pub struct ElectrumClientImpl {
    weak_self: Mutex<Option<Weak<ElectrumClientImpl>>>,
```
This is a very bad design choice; a struct should never reference itself, as that can create unstable behavior.
why not just introduce a new struct? e.g.
```rust
struct ElectrumClientImplWeak(Mutex<Option<Weak<ElectrumClientImpl>>>);
```
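For illustration, a minimal sketch of that suggestion (the `set`/`upgrade` helpers and the `std::sync::Mutex` choice are assumptions for the example, not code from this PR):

```rust
use std::sync::{Arc, Mutex, Weak};

struct ElectrumClientImpl { /* ...other client fields... */ }

// The suggested newtype keeps the weak back-reference outside of
// ElectrumClientImpl itself, so the client struct no longer refers to itself.
struct ElectrumClientImplWeak(Mutex<Option<Weak<ElectrumClientImpl>>>);

impl ElectrumClientImplWeak {
    fn new() -> Self {
        Self(Mutex::new(None))
    }

    // Hypothetical helper: record a weak handle once the Arc is created.
    fn set(&self, client: &Arc<ElectrumClientImpl>) {
        *self.0.lock().unwrap() = Some(Arc::downgrade(client));
    }

    // Hypothetical helper: try to get a strong handle back.
    fn upgrade(&self) -> Option<Arc<ElectrumClientImpl>> {
        self.0.lock().unwrap().as_ref().and_then(Weak::upgrade)
    }
}
```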
mm2src/coins/utxo/rpc_clients.rs
```rust
struct ConnMng(Arc<ConnMngImpl>);

impl ConnMng {
    async fn suspend_server(self, address: String) -> Result<(), String> {
```
Since this is a new implementation, please start using specific error types and MmError
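For illustration, a rough sketch of what a dedicated error type could look like (the `ConnMngError` variants are made up for the example; in KDF such errors would typically be returned through `MmError`/`MmResult`, which is omitted here to keep the sketch self-contained):

```rust
use std::fmt;

// Hypothetical error type for the connection manager; variant names are illustrative.
#[derive(Debug, Clone, PartialEq)]
enum ConnMngError {
    UnknownServer(String),
    AlreadySuspended(String),
}

impl fmt::Display for ConnMngError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            ConnMngError::UnknownServer(addr) => write!(f, "unknown electrum server: {}", addr),
            ConnMngError::AlreadySuspended(addr) => write!(f, "server already suspended: {}", addr),
        }
    }
}

// A specific error type instead of `String`; in the real code this would be
// wrapped so the error also carries a trace (e.g. an MmResult).
async fn suspend_server(address: String) -> Result<(), ConnMngError> {
    if address.is_empty() {
        return Err(ConnMngError::UnknownServer(address));
    }
    // ...actual suspension logic would live here...
    Ok(())
}
```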
mm2src/coins/utxo/rpc_clients.rs
```rust
spawner.spawn(async move {
    let state = conn_mng.0.guarded.lock().await;
    state.connecting.store(false, AtomicOrdering::Relaxed);
})
```
Can you please elaborate on this more? ConnectingStateCtx will be considered dropped before connecting is set to false if the lock was held by another operation for some time.
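For context, the pattern under discussion looks roughly like this (a sketch only, not the PR's actual `ConnectingStateCtx`; `tokio::spawn` stands in for the crate's own spawner here):

```rust
use std::sync::Arc;
use std::sync::atomic::{AtomicBool, Ordering as AtomicOrdering};
use futures::lock::Mutex as AsyncMutex;

struct Guarded {
    connecting: AtomicBool,
}

// Sketch of a connecting-state guard: while it is alive, `connecting` stays true.
struct ConnectingStateCtx {
    guarded: Arc<AsyncMutex<Guarded>>,
}

impl Drop for ConnectingStateCtx {
    fn drop(&mut self) {
        let guarded = self.guarded.clone();
        // The flag is only reset from a spawned task that must first acquire the
        // async lock, so if another operation holds that lock for a while, the
        // guard is already dropped but `connecting` still reads as true.
        tokio::spawn(async move {
            let state = guarded.lock().await;
            state.connecting.store(false, AtomicOrdering::Relaxed);
        });
    }
}
```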
Certainly should have been more detailed! Thanks for pointing this out! This place is the heart of selective connectivity.
mm2src/coins/utxo/rpc_clients.rs
```rust
let address: String = {
    if address.is_some() {
        address.map(|internal| internal.to_string())
    } else {
        guard.active.as_ref().cloned()
    }
}?;
```
Suggested change:
```rust
let address = address.map_or_else(|| guard.active.as_ref().cloned(), |internal| Some(internal.to_string()))?;
```
This carried over from the usage of these fields in the previous implementation. There is no reason to have these mutexes async; sync mutexes are better for performance. One mutex is left async though (establishing_connection) so as not to CPU-block other threads while they are waiting on the same connection to be established.
as a speed-up
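A minimal sketch of that split (the types and field names here are illustrative, not the PR's actual ones): the cheap state lock stays synchronous, while the one lock that callers may genuinely wait on stays asynchronous.

```rust
use std::sync::Mutex;
use futures::lock::Mutex as AsyncMutex;

struct ManagerState {
    active: Option<String>,
}

struct ConnectionManager {
    // Sync mutex: only short critical sections that never hold the lock across an .await.
    state: Mutex<ManagerState>,
    // Async mutex: tasks waiting for the same connection to be established yield
    // to the executor instead of blocking an OS thread.
    establishing_connection: AsyncMutex<()>,
}

impl ConnectionManager {
    fn active_server(&self) -> Option<String> {
        // Sync lock: acquired and released without awaiting.
        self.state.lock().unwrap().active.clone()
    }

    async fn establish(&self, address: String) {
        // Only one task performs the handshake; the others await here without
        // CPU-blocking their threads.
        let _guard = self.establishing_connection.lock().await;
        // ...perform the (async) connection handshake...
        self.state.lock().unwrap().active = Some(address);
    }
}
```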
All issues from my previous comments are fixed.
The "internet connection lost" case was tested too; all connections were re-established after the internet came back.
Really great work! Last review notes from me.
and avoid lots of conversions
are the changes in adex_cli required? @mariocynicys ... otherwise this looks good to go! 🚀
I'm adding the min & max connected there just in case, so that if we re-use it again the thing works as expected.
This PR refactors the electrum part of the RPC clients and also introduces a new feature related to connection-management decisions.
Within the electrum enable request, one could now specify two parameters (`min_connected` & `max_connected`):

- `min_connected`: KDF needs to keep at least `min_connected` working/active (maintained) connections at all times.
- `max_connected`: KDF needs to keep at most `max_connected` working/active (maintained) connections at any time.

For the `selective`/single-server effect, set these parameters to (1, 1).
For the `multiple`/all-servers effect, set these parameters to (1, len(servers)) (or don't set them at all, as this is used as the default). KDF will then try to establish `len(servers)` connections and keep them maintained. As some connections get lost for whatever reason, they will not be retried right away. Retries only take place when the number of maintained connections falls below `min_connected`, which is `1` in this case.

As an example with `min_connected = 10` and `max_connected = 68`: we want to be connected to `10` servers at all times, and if we ever fall below `10` we fire another round of reconnections. After this round we are guaranteed to have <= `68` connections. Say we have `10` connections now, we lose `1` and have `9`. A round of reconnections starts and we end up with some number of active connections between `9` (assuming only our 9 connected servers were the non-faulty ones) and `68`. If after the reconnection round we still fall below `10` = `min_connected` connections, we keep retrying.

`0 < min_connected <= max_connected <= len(servers)`
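A small sketch of those rules (function names here are illustrative, not from the PR): validate the documented bounds and decide when a reconnection round should be fired.

```rust
/// Check the documented invariant: 0 < min_connected <= max_connected <= number of servers.
fn validate_limits(min_connected: usize, max_connected: usize, server_count: usize) -> Result<(), String> {
    if min_connected == 0 {
        return Err("min_connected must be greater than 0".to_string());
    }
    if min_connected > max_connected {
        return Err("min_connected must not exceed max_connected".to_string());
    }
    if max_connected > server_count {
        return Err("max_connected must not exceed the number of servers".to_string());
    }
    Ok(())
}

/// Lost connections are not retried right away; a reconnection round is fired
/// only once the number of maintained connections drops below `min_connected`.
fn needs_reconnection_round(maintained: usize, min_connected: usize) -> bool {
    maintained < min_connected
}
```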
The connection manager maintains these maintained (working/active) connections and these are the ones used for any electrum request (they are all used concurrently, and whichever returns a successful result first is considered the response for this request).
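A sketch of that "race all maintained connections, first success wins" behavior using `futures::future::select_ok` (the per-connection query function is a stand-in, not the PR's actual code):

```rust
use futures::future::select_ok;

// Stand-in for sending one request over one maintained connection.
async fn query_one(server: String) -> Result<String, String> {
    // ...real code would send the electrum request over this connection...
    Ok(format!("response from {}", server))
}

// Send the same request over every maintained connection concurrently and
// return the first successful response; fail only if all of them fail.
async fn query_all(maintained: Vec<String>) -> Result<String, String> {
    let futures = maintained
        .into_iter()
        .map(|server| Box::pin(query_one(server)));
    let (response, _remaining) = select_ok(futures).await?;
    Ok(response)
}
```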
For requests directed at specific servers (like server_version, get_block_count_from, etc.), connections to these servers are established first if they are not in the maintained servers set, and then the request is performed. So even if a server isn't maintained/active, it can still be queried.
The servers' order in the electrum enable request encodes an implicit priority (previously in this PR, this was encoded as `primary` & `secondary`). Servers that appear first in the list are assumed to be more robust and thus have higher priority compared to the servers after them.

Since we do an altruistic round of retries every 10 minutes, if we ever had the case where a high-priority server was down (and thus wasn't maintained by us) and is now back online, we will prefer maintaining it over another already-maintained low-priority server (this is assuming the maintained servers set is full, i.e. `len(maintained servers) == min_connected`). Simply put, high-priority servers will kick out low-priority ones when they come back online if we have no room to fit both; see the sketch below.
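A sketch of that priority rule (illustrative only, not the PR's actual data structures): servers earlier in the configured list rank higher, and when the maintained set is full a returning higher-priority server displaces the lowest-priority maintained one.

```rust
/// Priority is the server's position in the configured list: lower index = higher priority.
fn priority(servers: &[String], addr: &str) -> Option<usize> {
    servers.iter().position(|s| s == addr)
}

/// Decide which maintained server (if any) should be dropped to make room for a
/// higher-priority `candidate` that just came back online, assuming the
/// maintained set is already full.
fn server_to_kick_out<'a>(
    servers: &[String],       // configured order, i.e. the implicit priority
    maintained: &'a [String], // currently maintained servers (assumed full)
    candidate: &str,          // server that just became reachable again
) -> Option<&'a String> {
    let candidate_prio = priority(servers, candidate)?;
    // The lowest-priority maintained server is the one with the largest index.
    let worst = maintained.iter().max_by_key(|addr| priority(servers, addr.as_str()))?;
    // Kick it out only if the candidate ranks strictly higher (smaller index).
    (priority(servers, worst)? > candidate_prio).then_some(worst)
}
```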
Testing

Without any changes to the former request format, the multiple effect takes place (i.e. KDF will try to connect to all servers and keep these connections maintained).
In addition to the former request format, there are two new fields, `min_connected` & `max_connected`, mentioned above.

example request:
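A rough illustration of such a request, assuming the long-standing legacy electrum activation format with the two new fields at the top level (coin, server URLs, and values are placeholders, not taken from this PR):

```rust
use serde_json::json;

fn main() {
    // Placeholders throughout: the coin, server URLs, and userpass value are
    // illustrative, and the placement of the two new fields at the top level
    // of the activation request is an assumption for this sketch.
    let request = json!({
        "userpass": "RPC_PASSWORD",
        "method": "electrum",
        "coin": "KMD",
        "servers": [
            { "url": "electrum1.example.com:10001" },
            { "url": "electrum2.example.com:10001" },
            { "url": "electrum3.example.com:10001" }
        ],
        // New in this PR: keep at least 1 and at most 2 maintained connections.
        "min_connected": 1,
        "max_connected": 2
    });
    println!("{}", serde_json::to_string_pretty(&request).unwrap());
}
```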
response:
Addresses #791