sync clarity and eliminate batch vouts.spend_tx_row_id update #1840
This PR has been running on tip.dcrdata.org, the hidden service, testnet, and one mainnet backend.
This contains a few distinct improvements, all related to the DB or sync, and thus they are stacked commits:

- Sync progress messages now clearly indicate the current stage, e.g. `Beginning SYNC STAGE 1 of 6 (block data import)...`
- The batch update of the `vouts.spend_tx_row_id` column during the initial sync is eliminated; the column is now always updated on-the-fly, just like during normal operation. The mega query that did this update (see `updateSpendTxInfoInAllVouts`) at the end of the initial sync step was surprisingly costly on slower drives. On an SSD, it would take less than 30 minutes, but on an HDD it ran for 12+ hours (not acceptable). Doing the update on-the-fly even for the initial block data import is acceptable since it updates with a condition on the primary key of the `vouts` table, and before the index on `spend_tx_row_id` is created. On my machine this increased the stage 1 time from 155 to about 165 minutes. The increase is likely to be higher on spinning disks, but it is more tolerable than an impractically large query at the end.
- Address caches are freshened in `StoreBlock` via `FreshenAddressCaches`. Previously, the DB layer would consider cached data expired if the block returned by a cache query was less than the best block. But this had the effect of invalidating the entire cache when a new block was recorded, even if there were no transactions that would actually invalidate a cache item for a certain address. This is particularly important for keeping the legacy treasury entry valid since it is no longer updated on every block. Reorg and block disapproval are also rigged to evict the affected addresses.
- When searching for an address, just use `DecodeAddress` and redirect to the address page. Don't use the `searchrawtransactions` RPC or the `AddressHistory` DB method.
- A txid with many leading zero digits (`0`) has never gotten close to happening for a random txid; the most ever is 0000031b3776cfb6ea658198a89e96a83abfc72a401552dee6fdc4e26d30f3f1. Would someone have any interest in hiding a tx from the search function by brute-forcing some element of the raw tx, like an output amount? It would still be viewable on the /tx page or the containing /block page.

This also updates the README to require an SSD for the postgres process.
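The on-the-fly spend tracking can be sketched as follows. This is an illustrative Go model, not dcrdata's actual schema or code: the `vouts` table is stood in by a map keyed on the primary key, and `markSpentOnTheFly` is a hypothetical helper showing that each update only touches rows located by primary key, which is why deferring the work to one giant batch query is unnecessary.

```go
package main

import "fmt"

// voutRow loosely models a row of the vouts table; RowID stands in for
// the primary key.
type voutRow struct {
	RowID        int64
	SpendTxRowID int64 // 0 means unspent
}

// markSpentOnTheFly records the spending transaction's row ID for each
// funding output as the spending transaction is stored, instead of
// deferring the work to one batch UPDATE after the full import. Each
// update locates its row by primary key, so it stays cheap, and during
// the initial import it happens before any index on SpendTxRowID exists.
func markSpentOnTheFly(vouts map[int64]*voutRow, spentVoutRowIDs []int64, spendTxRowID int64) {
	for _, id := range spentVoutRowIDs {
		if row, ok := vouts[id]; ok {
			row.SpendTxRowID = spendTxRowID
		}
	}
}

func main() {
	vouts := map[int64]*voutRow{
		1: {RowID: 1}, 2: {RowID: 2}, 3: {RowID: 3},
	}
	// A hypothetical spending tx (vouts row 7) consumes outputs 1 and 3.
	markSpentOnTheFly(vouts, []int64{1, 3}, 7)
	fmt.Println(vouts[1].SpendTxRowID, vouts[2].SpendTxRowID, vouts[3].SpendTxRowID) // 7 0 7
}
```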
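The targeted cache eviction idea can be sketched in Go as well. The `addrCache` type and `freshen` method below are invented names for illustration; the real `FreshenAddressCaches` operates on dcrdata's DB-backed address cache. The point is that a new block evicts only the addresses with transactions in that block, so untouched entries (such as the legacy treasury entry) stay valid:

```go
package main

import "fmt"

// addrCache maps an address to the height of its cached data (a
// stand-in for the real cached history). Instead of treating every
// entry below the new best height as stale, only affected addresses
// are evicted.
type addrCache struct {
	entries map[string]int64
}

// freshen evicts only the addresses touched by the new block.
func (c *addrCache) freshen(affected []string) {
	for _, addr := range affected {
		delete(c.entries, addr)
	}
}

func main() {
	c := &addrCache{entries: map[string]int64{
		"addrA": 100, "addrB": 100, "treasury": 90,
	}}
	// A new block at height 101 touches only addrB.
	c.freshen([]string{"addrB"})
	_, aOK := c.entries["addrA"]
	_, bOK := c.entries["addrB"]
	_, tOK := c.entries["treasury"]
	fmt.Println(aOK, bOK, tOK) // prints "true false true"
}
```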
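The address-search shortcut amounts to: validate the query as an address, and if it decodes, redirect rather than query the DB. A minimal sketch, assuming a stand-in validator (`looksLikeAddress` below only checks the first character purely for illustration; the real code uses `DecodeAddress`):

```go
package main

import "fmt"

// looksLikeAddress is a toy stand-in for a real address decoder such as
// DecodeAddress; it only inspects the first character, for illustration.
func looksLikeAddress(s string) bool {
	return len(s) > 0 && (s[0] == 'D' || s[0] == 'T')
}

// searchTarget returns the page to redirect to for a search query, or
// "" when the query is not address-shaped. No DB lookup (AddressHistory)
// and no searchrawtransactions RPC is needed to route the request.
func searchTarget(q string) string {
	if looksLikeAddress(q) {
		return "/address/" + q
	}
	return ""
}

func main() {
	fmt.Println(searchTarget("DsHypotheticalAddr"))
	fmt.Println(searchTarget("not-an-address") == "")
}
```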
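On the leading-zeros point: each extra leading zero hex digit is 16x less likely, so a uniformly random txid starts with at least k zeros with probability 16^-k. A small check (the helper is hypothetical, not from dcrdata) confirms the record txid quoted above has only 5 leading zero digits:

```go
package main

import "fmt"

// leadingZeroHexDigits counts the leading '0' characters in a
// hex-encoded hash string.
func leadingZeroHexDigits(hash string) int {
	n := 0
	for _, c := range hash {
		if c != '0' {
			break
		}
		n++
	}
	return n
}

func main() {
	// The record-holding txid mentioned above: a random txid has at
	// least 5 leading zeros with probability 16^-5 (about 1 in a million).
	txid := "0000031b3776cfb6ea658198a89e96a83abfc72a401552dee6fdc4e26d30f3f1"
	fmt.Println(leadingZeroHexDigits(txid)) // 5
}
```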