implement protocol 2.0; change db schema #101
Conversation
Note: this is ~done; but we probably won't merge it until also implementing the protocol changes in the client, to avoid having to change the protocol again due to some unforeseen issue (though I don't expect issues).
New schema looks good. At first attempt at syncing with this, performance isn't great, unfortunately. On AWS' r6g.large instance (dual-core ARM64 w/ SHA extensions, 16GB RAM) with pypy and rocksdb, current spesmilo:master syncs in under 24 hours, whereas this branch is estimated to take about a week. I don't know why the CPU isn't able to max out, nor is disk throughput; it might be some very large pages being used in memory, hard to tell. If there's anything else you want me to try, e.g. on the same compute configuration, let me know.

UPDATE: performance was better on r7g.2xlarge (32GB RAM). The rocksdb driver that was used might also have higher RAM requirements than the default leveldb.
What value of (The cache stores a lot more data for the history dbs now:
Hi, what is the expected timeline for protocol 1.5 to be merged in electrumx and implemented in electrum? I have a few wallets which can't load because of their very long transaction histories.
Thanks!
Supposedly it makes a difference (see e.g. [0]), and depending on how batching works it makes sense it would, but during a few full syncs of testnet I've done, it was within measurement error. Still, existing code was already doing this. [0]: https://stackoverflow.com/q/54941342
with the pending db changes, an upgrade is ~as fast as a resync from genesis
now that we have our own txindex
This will allow looking up which tx spent an outpoint.
In Bitcoin consensus, a txout index is stored as a uint32_t. However, in practice, an output in a tx uses at least 10 bytes (for an OP_TRUE output), so:
- to exhaust a 2-byte namespace, a tx would need to have a size of at least 2 ** 16 * 10 = 655 KB,
- to exhaust a 3-byte namespace, a tx would need to have a size of at least 2 ** 24 * 10 = 167 MB.
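The arithmetic above can be sketched as follows (illustrative only; the constant and helper name are mine, not part of the codebase):

```python
# Minimum serialized size of a tx output: 8 bytes (value) +
# 1 byte (script length) + 1 byte (OP_TRUE script) = 10 bytes.
MIN_OUTPUT_SIZE = 10

def min_tx_size_to_exhaust(index_bytes: int) -> int:
    """Smallest tx size (in bytes) that could use every value of an
    index_bytes-wide txout-index namespace."""
    return (2 ** (8 * index_bytes)) * MIN_OUTPUT_SIZE

print(min_tx_size_to_exhaust(2))  # 655360 bytes, ~655 KB
print(min_tx_size_to_exhaust(3))  # 167772160 bytes, ~167 MB
```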
notifications not implemented yet
Similar to scripthash statuses, the height of an unconfirmed tx is:
- `-1` if it has any unconfirmed parents,
- `0` otherwise.
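A minimal sketch of that rule (hypothetical helper, not the actual electrumx code):

```python
def mempool_tx_height(parent_txids: set, mempool_txids: set) -> int:
    """Height reported for an unconfirmed tx, per the convention above:
    -1 if it spends any other unconfirmed tx, 0 otherwise."""
    return -1 if parent_txids & mempool_txids else 0

# Made-up txids for illustration:
mempool = {"aa" * 32, "bb" * 32}
print(mempool_tx_height({"aa" * 32}, mempool))  # -1: a parent is unconfirmed
print(mempool_tx_height({"cc" * 32}, mempool))  # 0: all parents confirmed
```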
We already enforce that server.version must be the first received message in a session, but we also need to ensure that the server finishes processing that message and sets up the correct protocol version before starting to process further messages.
History.get_txnums and History.backup depend on ordering of tx_nums, so we want the lexicographical order (used by leveldb comparator) to match the numerical order.
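One way to get that property is to encode tx_nums big-endian, since the default byte-wise comparator then sorts keys numerically. A sketch (the actual key layout in this branch may differ):

```python
import struct

def pack_txnum(tx_num: int) -> bytes:
    # 8-byte big-endian: lexicographic byte order == numeric order.
    return struct.pack(">Q", tx_num)

nums = [1, 256, 2, 300]
assert sorted(nums, key=pack_txnum) == sorted(nums)

# Little-endian keys would break this (e.g. 256 sorts before 1):
assert sorted(nums, key=lambda n: struct.pack("<Q", n)) != sorted(nums)
```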
Note that this is a soft fork: the server can apply it even for past protocol versions. Previously, with the order being undefined, if an address had multiple mempool transactions touching it, switching between different servers could result in a change in address status simply as a result of these servers ordering mempool txs differently. This would result in the client re-requesting the whole history of the address.

```
D/i | interface.[electrum.blockstream.info:60002] | <-- ('blockchain.scripthash.subscribe', ['660b44502503064f9d5feee48726287c0973e25bc531b4b8a072f57f143d5cd0']) {} (id: 12)
D/i | interface.[electrum.blockstream.info:60002] | --> 9da27f9df91e3f860212f65b736fa20a539ba6e3d509f6370367ee7f10a4d5b0 (id: 12)
D/i | interface.[electrum.blockstream.info:60002] | <-- ('blockchain.scripthash.get_history', ['660b44502503064f9d5feee48726287c0973e25bc531b4b8a072f57f143d5cd0']) {} (id: 13)
D/i | interface.[electrum.blockstream.info:60002] | --> [{'fee': 200, 'height': 0, 'tx_hash': '3ee6d6e26291ce360127fe039b816470fce6eeea19b5c9d10829a1e4efc2d0c7'}, {'fee': 239, 'height': 0, 'tx_hash': '9e050f09b676b9b0ee26aa02ccee623fae585a85d6a5e24ecedd6f8d6d2d3b1d'}, {'fee': 178, 'height': 0, 'tx_hash': 'fb80adbf8274190418cb3fb0385d82fe9d47a844d9913684fa5fb3d48094b35a'}, {'fee': 200, 'height': 0, 'tx_hash': '713933c50b7c43f606dad5749ea46e3bc6622657e9b13ace9d639697da266e8b'}] (id: 13)
D/i | interface.[testnet.hsmiths.com:53012] | <-- ('blockchain.scripthash.subscribe', ['660b44502503064f9d5feee48726287c0973e25bc531b4b8a072f57f143d5cd0']) {} (id: 12)
D/i | interface.[testnet.hsmiths.com:53012] | --> f7ef7237d2d62a3280acae05616200b96ad9dd85fd0473c29152a4a41e05686c (id: 12)
D/i | interface.[testnet.hsmiths.com:53012] | <-- ('blockchain.scripthash.get_history', ['660b44502503064f9d5feee48726287c0973e25bc531b4b8a072f57f143d5cd0']) {} (id: 13)
D/i | interface.[testnet.hsmiths.com:53012] | --> [{'tx_hash': '9e050f09b676b9b0ee26aa02ccee623fae585a85d6a5e24ecedd6f8d6d2d3b1d', 'height': 0, 'fee': 239}, {'tx_hash': 'fb80adbf8274190418cb3fb0385d82fe9d47a844d9913684fa5fb3d48094b35a', 'height': 0, 'fee': 178}, {'tx_hash': '3ee6d6e26291ce360127fe039b816470fce6eeea19b5c9d10829a1e4efc2d0c7', 'height': 0, 'fee': 200}, {'tx_hash': '713933c50b7c43f606dad5749ea46e3bc6622657e9b13ace9d639697da266e8b', 'height': 0, 'fee': 200}] (id: 13)
```
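For context: the Electrum protocol defines a scripthash status as the sha256 of the concatenated `tx_hash:height:` strings of the address history, in server order, so any difference in mempool ordering changes the status. A toy illustration with made-up hashes:

```python
import hashlib

def scripthash_status(history):
    """history: list of (tx_hash_hex, height) tuples, in server order."""
    s = "".join(f"{tx_hash}:{height}:" for tx_hash, height in history)
    return hashlib.sha256(s.encode()).hexdigest()

# Two mempool txs with made-up hashes; same set, different server ordering:
hist = [("aa" * 32, 0), ("bb" * 32, 0)]
print(scripthash_status(hist) != scripthash_status(hist[::-1]))  # True
```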
in a way that works consistently between LevelDB and RocksDB.
Handling of client_statushash and client_height is not yet implemented.
If a client requested the status of a very busy address (with a cold cache: no precalc yet), we might have hit a timeout before storing any of the intermediate statuses calculated. Then, if the client reconnected and made the same request, we would get stuck in this loop and never make progress. With this change, we try to store intermediate hashes sooner, so that even if there is a timeout, there is less work left to do when the client reconnects.
Re my comments above, I was able to get a sync within a couple of days testing this on 32GB of RAM instead of 16GB. I don't know if Linux decided to do small-but-frequent swapping or if RocksDB is self-limiting, but either way it looks like the new schema just has larger system requirements.
@SomberNight Electrum 4.2.1 is out; are there any plans regarding when Protocol 1.5 will be added to Electrum?
For client devs wanting to test the new protocol, here are electrumx servers: Metrics and logs for both servers: https://db2.electrum.justinarthur.com/ It's not automatically pulling on branch changes, so ping me on IRC if any of you need me to pull in new changes, a new branch, or start a new sync. I'm using my cloud vendor's spot market, so servers are subject to occasional interruption if my cloud vendor's customers increase their regularly contracted usage.

UPDATE: Bitcoin mainnet sync has completed. Took 3d10m.
I'm shutting down the test servers to save money. If anyone is actively developing on Electrum or similar stratum client for protocol 1.5 and needs infrastructure to test against, just ping me on here or as jarthur in #electrum on Libera. |
@JustinTArthur Is this close to being merged?
Hello @JustinTArthur |
This implements Electrum Protocol 2.0, see #90 (formerly named version 1.5).
Supersedes #80 (see existing comments there).
Includes #109.