Make sync, inbound, and block verifier check if a block hash is in any chain or any queue #862
Comments
What's the advantage of having a separate request type for this, compared to checking whether the returned depth is present?
Depth accesses the height to hash map. I don't know if it's cheaper to check if a hash is present in the hash to block map, but I can't imagine it matters that much, as long as we don't actually retrieve the block itself. So I think moving this wrapper function to the state service would be the best choice here. (See also #865, where we remove as many of these state checks as we can.)
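As a rough illustration of the trade-off being discussed, here is a minimal sketch with hypothetical in-memory indexes (these are not Zebra's actual storage structures): a depth lookup has to locate the block's height first, while a presence check is a single key lookup that never touches the block data.

```rust
use std::collections::HashMap;

/// Hypothetical stand-ins for the state's indexes, purely for illustration;
/// zebra-state's real storage layout is different.
struct StateIndexes {
    /// Index used by depth-style queries.
    height_to_hash: HashMap<u32, [u8; 32]>,
    /// Index keyed directly by block hash.
    hash_to_block: HashMap<[u8; 32], Vec<u8>>,
    tip_height: u32,
}

impl StateIndexes {
    /// Depth-style check: find the block's height, then compute its depth
    /// below the tip. Needs to search the height-to-hash index.
    fn depth(&self, hash: &[u8; 32]) -> Option<u32> {
        self.height_to_hash
            .iter()
            .find_map(|(height, h)| (h == hash).then(|| self.tip_height - height))
    }

    /// Presence-style check: a plain key lookup in the hash-keyed index,
    /// without retrieving the block itself.
    fn contains(&self, hash: &[u8; 32]) -> bool {
        self.hash_to_block.contains_key(hash)
    }
}
```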
@yaahc asked in #853 (comment)
The exact depth does not matter, and the interfaces we use should reflect that fact. These race conditions do not matter for this particular caller, but they do exist.
It's also unclear what depth means for a block that's on a side-chain. In this ticket, we resolve these issues by adding a dedicated request that only checks whether a block hash is known.
And it's clear that modules which use this new interface don't rely on anything else about the block or state.
To be more precise, there's not exactly a race condition, just the potential for TOCTOU issues where another part of the software requests information about what's in the state, acts on it, and the state is updated in the meantime. But this is the case for any state query, and I think that the solution is the direction we're already going (where all state updates are permissioned by the state itself, which can do synchronous checks). So I don't see the difference between returning the depth or not; in either case, the information returned by the state can become stale. It seems like the only problem is that you want to check whether a block hash is in any chain, while the current API checks whether a block hash is in the best chain.
Functionally, that's the only change, but I think a specific function for checking a hash is also useful for readability and modularity (and potentially future optimisations). It's also worth noting that there are two different kinds of TOCTOU issues here.
This seems like a reasonable default for block lookups.
😅
Is this something that needs to be done as part of #2224?
This might be needed to fix syncing bugs.
This is a real bug, but it doesn't seem to cause that many problems in practice.
@arya2 I think you mentioned seeing this issue in the block verifier as well? Can you add some of that context here? Thanks!
As part of resolving this, we should remember to update the related code in zebra/zebrad/src/components/sync.rs (line 961 at commit 5a88fe7).
The issue highlighted by the audit was that this issue was closed while there was still a reference to it in the code. So, by re-opening this issue, we have already "fixed" the issue highlighted by the audit (#6281). The next question is: how important is the actual issue to fix right now?
We handled this issue by checking for blocks that are already in side chains in PR #6335. We closed PR #6397 because it didn't work, and we'd run out of time to fix it. The remaining fix is optional because duplicate queued blocks are already handled within the syncer and within the inbound downloader; we just don't handle duplicates across both of them, which is ok for now.
Motivation
In the sync service (#853), inbound downloader, and block verifier, we are using the GetDepth request to check if a block is present in the state:
https://github.com/ZcashFoundation/zebra/pull/853/files#r467684617
But the sync, inbound, and block verifier actually need to know if a block hash is present in any chain. (This is a bug.)
They also don't need a depth, or any other extra data.
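As a rough sketch of the difference (the request and response names below are placeholders, not the exact zebra-state API), the current depth-based check and the proposed presence check look something like this:

```rust
/// Placeholder request and response types, standing in for the state service API.
enum Request {
    /// Current pattern: ask for the block's depth in the best chain.
    Depth([u8; 32]),
    /// Proposed pattern: ask whether the hash is in any chain or any queue.
    KnownBlock([u8; 32]),
}

enum Response {
    Depth(Option<u32>),
    KnownBlock(bool),
}

/// Current check: infer "the block is already in the state" from its depth.
/// This only sees the best chain, and returns data the caller never uses.
fn already_in_state_via_depth(rsp: &Response) -> bool {
    matches!(rsp, Response::Depth(Some(_)))
}

/// Proposed check: a direct yes/no answer that also covers side chains and queues.
fn already_in_state(rsp: &Response) -> bool {
    matches!(rsp, Response::KnownBlock(true))
}
```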
Scheduling
This risk is acceptable for the stable release, but we need to fix it before we support lightwalletd.
We should also fix this bug if Zebra continues to hang after we fix the known hang bugs.
Solution
Each section should be implemented in a separate PR.
A. Add a new "contains" block hash request:
- accept a `HashSet` of block hashes, to reduce the number of state requests (see the sketch below)
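A minimal sketch of one possible shape for the new request, assuming it is batched over a `HashSet` as suggested above (the names here are illustrative, not a final API):

```rust
use std::collections::HashSet;

/// Hypothetical block hash type.
type BlockHash = [u8; 32];

/// Illustrative request: check many hashes in a single state request.
enum StateRequest {
    ContainsBlockHashes(HashSet<BlockHash>),
}

/// Illustrative response: just the subset of hashes that are already in
/// any chain or any queue, with no depth or other block data attached.
enum StateResponse {
    ContainsBlockHashes(HashSet<BlockHash>),
}

/// Example caller-side use: keep only the hashes the state has never seen,
/// so the syncer or inbound downloader doesn't re-download them.
fn filter_unknown(
    candidates: &HashSet<BlockHash>,
    already_known: &HashSet<BlockHash>,
) -> HashSet<BlockHash> {
    candidates.difference(already_known).copied().collect()
}
```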
B. We want to do these checks:
For example, see this code:
zebra/zebra-state/src/service.rs, lines 99 to 104 at commit ebe1c9f
D. Check if the blocks are already waiting in a checkpoint verifier queue.
E. De-duplicate the `sync` and `inbound` block downloaders:
- an `enum` to distinguish `sync` and `inbound` requests (sketched below)
- applying `inbound` limits to `sync` blocks
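For step E, here is a sketch of the kind of enum that could distinguish the two request sources in a shared downloader, with a per-source limit check; the type names and limit values are assumptions, not Zebra's actual code:

```rust
/// Hypothetical marker for where a block download request came from.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
enum DownloadSource {
    /// Requested by the syncer.
    Sync,
    /// Requested via the inbound service (e.g. gossiped blocks).
    Inbound,
}

/// Illustrative entry in a shared download queue: both kinds of request go
/// through the same queue, so duplicates can be detected by hash.
struct QueuedDownload {
    hash: [u8; 32],
    source: DownloadSource,
}

/// One possible limit policy (values are made up): apply the small inbound
/// queue limit only to inbound requests, and a larger limit to sync requests.
fn over_limit(queue_len: usize, source: DownloadSource) -> bool {
    const INBOUND_LIMIT: usize = 10;
    const SYNC_LIMIT: usize = 1_000;
    match source {
        DownloadSource::Inbound => queue_len >= INBOUND_LIMIT,
        DownloadSource::Sync => queue_len >= SYNC_LIMIT,
    }
}
```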
Alternatives
We could implement the new request as a wrapper function around a more specific query, which discards any extra data. But there aren't any requests that find blocks in any chain.
We might also want to return success on duplicate blocks, rather than an error. But sync restarts should be even rarer once we fix this bug.
Related
The state request for this could support more detailed errors for the submitblock method (#5487) if it specifies where the block is in the state (i.e. best chain, side chain, or queued).
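For example, a more detailed response along these lines (purely illustrative, with made-up names) would let the submitblock RPC distinguish the cases:

```rust
/// Hypothetical detailed answer for a "contains" query, so callers like the
/// submitblock RPC (#5487) can report where the duplicate block lives.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
enum KnownBlockLocation {
    /// The hash is in the best chain.
    BestChain,
    /// The hash is in a non-finalized side chain.
    SideChain,
    /// The hash is still waiting in a download or verification queue.
    Queued,
}

/// Illustrative mapping to a submitblock-style result string; the exact
/// strings used by the RPC are not specified here.
fn duplicate_result(location: KnownBlockLocation) -> &'static str {
    match location {
        KnownBlockLocation::BestChain | KnownBlockLocation::SideChain => "duplicate",
        KnownBlockLocation::Queued => "duplicate-inconclusive",
    }
}
```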