
Implement engine_getBlobsV1 #1

Closed · wants to merge 3 commits

Conversation

@michaelsproul (Owner) commented May 23, 2024

Implement the new API for fetching blobs from the mempool to improve propagation:

@Rjected commented Jun 24, 2024

Just noting that requesting by commitment or versioned hash will require refactoring our on-disk blob store to work with those types.

@michaelsproul (Owner, Author)

Thanks for the feedback @Rjected

Maybe the on-disk blob store could keep a mapping from versioned hash to transaction hash(es)? It could build the mapping at startup and keep it in memory, or it could write the mapping to disk and read/write it on demand. My intuition says the in-memory mapping would be easier because there aren't too many blobs, but you'd be a better judge of whether that's feasible.
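For concreteness, here is a minimal sketch of what such an in-memory index could look like, written against plain std types rather than reth's own (the struct and method names are illustrative assumptions, not existing reth API):

```rust
use std::collections::HashMap;

/// 32-byte hash, a stand-in for reth's `B256`.
type Hash256 = [u8; 32];

/// Hypothetical in-memory index from blob versioned hash to the transaction
/// hash(es) whose sidecar contains that blob.
#[derive(Default)]
struct VersionedHashIndex {
    by_versioned_hash: HashMap<Hash256, Vec<Hash256>>,
}

impl VersionedHashIndex {
    /// Record a blob tx, e.g. while walking the on-disk store at startup
    /// or when a new sidecar is inserted.
    fn insert(&mut self, versioned_hash: Hash256, tx_hash: Hash256) {
        self.by_versioned_hash
            .entry(versioned_hash)
            .or_default()
            .push(tx_hash);
    }

    /// Resolve a versioned hash to candidate tx hashes, which can then be
    /// passed to the existing by-tx-hash lookup (e.g. a `read_one`-style call).
    fn lookup(&self, versioned_hash: &Hash256) -> &[Hash256] {
        self.by_versioned_hash
            .get(versioned_hash)
            .map(Vec::as_slice)
            .unwrap_or(&[])
    }

    /// Drop entries when the corresponding blob files are deleted.
    fn remove_tx(&mut self, versioned_hashes: &[Hash256], tx_hash: &Hash256) {
        for vh in versioned_hashes {
            if let Some(txs) = self.by_versioned_hash.get_mut(vh) {
                txs.retain(|t| t != tx_hash);
                if txs.is_empty() {
                    self.by_versioned_hash.remove(vh);
                }
            }
        }
    }
}
```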

I'm guessing the DiskFileBlobStore here is what gets used in prod?

/// Retrieves the blob data for the given transaction hash.
#[inline]
fn read_one(&self, tx: B256) -> Result<Option<BlobTransactionSidecar>, BlobStoreError> {
    let path = self.blob_disk_file(tx);
    let data = {
        let _lock = self.file_lock.read();
        match fs::read(&path) {
            Ok(data) => data,
            Err(e) if e.kind() == io::ErrorKind::NotFound => return Ok(None),
            Err(e) => {
                return Err(BlobStoreError::Other(Box::new(DiskFileBlobStoreError::ReadFile(
                    tx, path, e,
                ))))
            }
        }
    };
    BlobTransactionSidecar::decode(&mut data.as_slice())
        .map(Some)
        .map_err(BlobStoreError::DecodeError)
}

@Rjected commented Jun 25, 2024

> I'm guessing the DiskFileBlobStore here is what gets used in prod?

Yeah, the DiskFileBlobStore is used in practice. Here is where the blob is primarily stored in the blob store:
https://github.com/paradigmxyz/reth/blob/6e146e1140e874b88c304fdc3c5ac0e046e66263/crates/transaction-pool/src/pool/mod.rs#L1188-L1191
https://github.com/paradigmxyz/reth/blob/6e146e1140e874b88c304fdc3c5ac0e046e66263/crates/transaction-pool/src/pool/mod.rs#L788-L795

And here is where it is retrieved:
https://github.com/paradigmxyz/reth/blob/6e146e1140e874b88c304fdc3c5ac0e046e66263/crates/transaction-pool/src/pool/mod.rs#L299-L308

Deletion gets slightly more tricky, since we use BlobStoreUpdates::Finalized to let the blob store know it should remove a set of blob files.

Replacement also complicates things; we would need to surface the versioned hash there:
https://github.com/paradigmxyz/reth/blob/6e146e1140e874b88c304fdc3c5ac0e046e66263/crates/transaction-pool/src/pool/mod.rs#L460-L463

So yeah, it seems like a lot of things would need refactoring if we wanted to change the key to versioned hash entirely, although it might make sense. The in-memory mapping does seem like the quickest way to implement this.
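To make that refactoring surface a bit more concrete, here is a rough, self-contained sketch of keeping a versioned-hash index in step with the three call sites linked above (insertion, Finalized-driven deletion, replacement). The event enum and its fields are purely hypothetical, not reth types:

```rust
use std::collections::HashMap;

/// 32-byte hash, a stand-in for reth's `B256`.
type Hash256 = [u8; 32];

/// Illustrative events mirroring the three call sites: insertion into the
/// blob store, `BlobStoreUpdates::Finalized`-style deletion, and replacement.
/// These are not reth's actual types.
enum BlobStoreEvent {
    Inserted { tx_hash: Hash256, versioned_hashes: Vec<Hash256> },
    Finalized { tx_hashes: Vec<Hash256> },
    Replaced { old_tx: Hash256, new_tx: Hash256, new_versioned_hashes: Vec<Hash256> },
}

/// Keep a versioned-hash -> tx-hash map consistent with the blob store.
fn apply_event(index: &mut HashMap<Hash256, Hash256>, event: BlobStoreEvent) {
    match event {
        BlobStoreEvent::Inserted { tx_hash, versioned_hashes } => {
            for vh in versioned_hashes {
                index.insert(vh, tx_hash);
            }
        }
        BlobStoreEvent::Finalized { tx_hashes } => {
            // Finalization removes whole sets of blob files at once.
            index.retain(|_, tx| !tx_hashes.contains(tx));
        }
        BlobStoreEvent::Replaced { old_tx, new_tx, new_versioned_hashes } => {
            // The replaced tx's blobs are evicted; the replacement's are added.
            index.retain(|_, tx| *tx != old_tx);
            for vh in new_versioned_hashes {
                index.insert(vh, new_tx);
            }
        }
    }
}
```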

@mattsse commented Jul 8, 2024

Another approach that could work for the request: fetch all 4844 transactions from the pool (this shouldn't be too expensive because it's a Vec<Arc<…>>), then simply try finding the matching tx, and finally fetch the blob by tx hash.
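A rough sketch of that scan, with placeholder types standing in for reth's pooled-transaction handles (PooledBlobTx and its fields are assumptions for illustration only):

```rust
use std::sync::Arc;

/// 32-byte hash, a stand-in for reth's `B256`.
type Hash256 = [u8; 32];

/// Placeholder for a pooled EIP-4844 transaction; reth's real pooled type
/// carries much more, this keeps only what the scan needs.
struct PooledBlobTx {
    tx_hash: Hash256,
    blob_versioned_hashes: Vec<Hash256>,
}

/// Linear scan over the cheaply obtained list of blob transactions: find the
/// tx advertising the requested versioned hash, then the blob can be fetched
/// from the store by tx hash, which it already supports.
fn find_tx_for_versioned_hash(
    blob_txs: &[Arc<PooledBlobTx>],
    wanted: &Hash256,
) -> Option<Hash256> {
    blob_txs
        .iter()
        .find(|tx| tx.blob_versioned_hashes.iter().any(|vh| vh == wanted))
        .map(|tx| tx.tx_hash)
}
```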

@michaelsproul michaelsproul changed the base branch from main to cargo-update July 22, 2024 12:16
@michaelsproul michaelsproul changed the base branch from cargo-update to main July 22, 2024 12:16
@michaelsproul (Owner, Author)

@mattsse I've updated the prototype to follow the latest spec by doing something similar with the in-memory blob cache (just iterate the full cache). I think the main inefficiency is probably recalculating the versioned hash, but that could be optimised out.
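For illustration, a minimal sketch of that cache scan, including the EIP-4844 kzg_to_versioned_hash computation (sha256 of the commitment with the first byte replaced by 0x01). The CachedSidecar type is a stand-in rather than reth's actual cache entry, and the example assumes the external sha2 crate:

```rust
use sha2::{Digest, Sha256}; // external `sha2` crate

/// EIP-4844 version prefix for KZG versioned hashes.
const VERSIONED_HASH_VERSION_KZG: u8 = 0x01;

/// `kzg_to_versioned_hash` from EIP-4844: sha256 of the 48-byte commitment
/// with the first byte replaced by the version prefix.
fn kzg_to_versioned_hash(commitment: &[u8; 48]) -> [u8; 32] {
    let digest = Sha256::digest(commitment);
    let mut hash = [0u8; 32];
    hash.copy_from_slice(&digest);
    hash[0] = VERSIONED_HASH_VERSION_KZG;
    hash
}

/// Stand-in for a cached sidecar: parallel lists of commitments and blobs.
/// Not reth's actual cache entry type.
struct CachedSidecar {
    commitments: Vec<[u8; 48]>,
    blobs: Vec<Vec<u8>>,
}

/// Scan the whole in-memory cache, recomputing each versioned hash on the fly.
/// Storing the versioned hashes alongside the commitments would remove the
/// repeated hashing mentioned above.
fn find_blob<'a>(cache: &'a [CachedSidecar], wanted: &[u8; 32]) -> Option<&'a [u8]> {
    for sidecar in cache {
        for (commitment, blob) in sidecar.commitments.iter().zip(&sidecar.blobs) {
            if kzg_to_versioned_hash(commitment) == *wanted {
                return Some(blob.as_slice());
            }
        }
    }
    None
}
```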

@@ -415,7 +415,7 @@ impl Transaction {

@michaelsproul (Owner, Author)

I need to revert the fmt changes to this file

@Rjected commented Jul 22, 2024

@michaelsproul do you mind opening this against the main reth repo? It will be easier for the team to track

@michaelsproul (Owner, Author)

Good idea. Closing in favour of:
