We read raw bytes from the data file during the pruning/compaction process and use these bytes to clean up the `output_pos` index.

For each pruned chunk of bytes we attempt to call `delete_output_pos`, but these bytes do not simply represent a commitment; they represent an `output_identifier`. The chunk of bytes is actually empty...

It is not safe to do this anyway: the TXO set can contain duplicate outputs with the same commitment, one spent and one unspent, and in that scenario the `output_pos` index cannot be cleaned up based purely on spent outputs.
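To make the failure mode concrete, here is a minimal std-only Rust sketch. The types are hypothetical stand-ins (a `[u8; 33]` alias for the commitment, a `HashMap` for the index), not Grin's actual store API: it shows that when two outputs share a commitment, deleting the index entry while compacting the spent one also orphans the unspent one.

```rust
// Hypothetical sketch: why deleting output_pos entries keyed by
// commitment is unsafe when duplicate commitments exist in the TXO set.

use std::collections::HashMap;

type Commitment = [u8; 33]; // Pedersen commitment bytes (illustrative alias)

fn main() {
    // output_pos index: commitment -> MMR position.
    let mut output_pos: HashMap<Commitment, u64> = HashMap::new();

    let commit: Commitment = [7u8; 33];

    // First output with this commitment is created, then spent.
    output_pos.insert(commit, 10);
    // A second output with the *same* commitment is created later and is
    // still unspent; the index entry now points at the new position.
    output_pos.insert(commit, 42);

    // Compaction walks pruned (spent) outputs and deletes by commitment...
    output_pos.remove(&commit);

    // ...which also destroys the lookup for the unspent output at pos 42.
    assert!(output_pos.get(&commit).is_none());
    println!("unspent output at pos 42 is no longer reachable via the index");
}
```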
Or do something like `delete_peers()` in the p2p store, where we iterate over entries in the db by prefix and remove those that meet some defined criterion (in this case, spent outputs).

This isn't really any more "brute force" than how we currently iterate over the entire data file, pruning as we go.

We'd basically just do two passes: one to prune the data file, then another over the `output_pos` index (which grows with the UTXO set) to check whether each entry is still valid, removing those that are not (see the sketch below).
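A rough sketch of that second pass, again with hypothetical stand-ins: a `HashMap` plays the role of the prefix-iterated db index, and a `HashSet` of unspent MMR positions stands in for the leaf set/bitmap an actual implementation would consult.

```rust
// Hypothetical sketch of the second cleanup pass: keep only index
// entries whose MMR position still refers to an unspent output.

use std::collections::{HashMap, HashSet};

type Commitment = [u8; 33];

/// Remove index entries whose positions are no longer in the UTXO set.
/// `unspent_positions` stands in for a leaf-set/bitmap lookup.
fn clean_output_pos_index(
    output_pos: &mut HashMap<Commitment, u64>,
    unspent_positions: &HashSet<u64>,
) {
    output_pos.retain(|_commit, pos| unspent_positions.contains(pos));
}

fn main() {
    let mut output_pos: HashMap<Commitment, u64> = HashMap::new();
    output_pos.insert([1u8; 33], 10); // spent: entry should be removed
    output_pos.insert([2u8; 33], 42); // unspent: entry should survive

    let unspent: HashSet<u64> = [42u64].into_iter().collect();

    clean_output_pos_index(&mut output_pos, &unspent);
    assert_eq!(output_pos.len(), 1);
    assert_eq!(output_pos.get(&[2u8; 33]), Some(&42));
}
```

The key design point is that validity is decided by looking up the position in the current UTXO set, never by re-reading the (possibly already pruned and empty) raw bytes from the data file.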
See #2603