Conversation
Force-pushed 5e850c9 → 28ac213
Force-pushed 28ac213 → 3256978
Looks good. I think we need a test to ensure it works appropriately, either with a local node or with a large mainnet/testnet query?
/// Returns a stream of logs that are loaded in pages of the given page size
fn get_logs_paginated<'a>(
    &'a self,
    filter: &Filter,
    page_size: u64,
) -> LogQuery<'a, Self::Provider> {
    self.inner().get_logs_paginated(filter, page_size)
}
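To illustrate how such a paginated query divides the block range, here is a minimal self-contained sketch; the helper `page_ranges` is hypothetical and only mirrors the splitting idea, it is not part of the ethers API:

```rust
/// Split the inclusive block range [from, to] into pages of at most
/// `page_size` blocks each, returned as inclusive (start, end) pairs.
fn page_ranges(from: u64, to: u64, page_size: u64) -> Vec<(u64, u64)> {
    let mut pages = Vec::new();
    let mut start = from;
    while start <= to {
        // End of this page, clamped to the overall range.
        let end = (start + page_size - 1).min(to);
        pages.push((start, end));
        // The next page starts one past this page's end.
        start = end + 1;
    }
    pages
}

fn main() {
    // e.g. blocks 0..=2500 with a page size of 1000
    let pages = page_ranges(0, 2500, 1000);
    println!("{:?}", pages); // [(0, 999), (1000, 1999), (2000, 2500)]
}
```

Each page is then issued as its own `eth_getLogs` request and the results are streamed to the caller in order.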
@prestwich wdyt about unifying this logic in the default get_logs behavior?
I think it's a pretty reasonable way to smooth over inconsistent RPC paging requirements 🤔
Ah, actually I tested with an example; it seems I missed committing it.
LGTM. Separately, I wonder if we should unify this with get_logs. It would be a breaking change, requiring the function signature to return LogQuery instead of Vec<Log>, but that should be navigable since it would basically only introduce an additional await when iterating over the logs.
Is there any benefit to using get_logs over always using the paginated version? Probably not?
Yeah, no real benefits to using it.
I'd give the follow-up a try. I think this is a good enough change that it's worth breaking, and we can include it in the changelog.
}
LogQueryState::LoadLogs(fut) => {
    let logs = futures_util::ready!(fut.as_mut().poll(ctx))
        .expect("error occurred loading logs");
Panicking on failed RPCs is not very nice. I'd much prefer if this was a TryStream instead. Will submit PR when I get to it.
Implemented TryStream along with other fixes here: 0143caf
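The difference between panicking and surfacing the error as a stream item can be shown in a minimal form; the names here (`load_page`, its error type) are hypothetical stand-ins, not ethers' actual `LogQuery` API:

```rust
// A page loader that can fail, standing in for an RPC call (hypothetical).
fn load_page(start: u64) -> Result<Vec<u64>, String> {
    if start >= 3000 {
        Err(format!("rpc error at block {start}"))
    } else {
        Ok(vec![start, start + 1])
    }
}

fn main() {
    // Panicking style: any failed page aborts the whole program.
    // let logs = load_page(3000).expect("error occurred loading logs");

    // TryStream-style: the error becomes an item the caller can handle.
    match load_page(3000) {
        Ok(logs) => println!("got {} logs", logs.len()),
        Err(e) => eprintln!("page failed, caller decides what to do: {e}"),
    }
}
```

With a `TryStream`, a single failed page surfaces as an `Err` item instead of tearing down the whole query.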
* feat: add paginated logs
* docs: add paginated_logs example
* remove unpin
    return Poll::Ready(None)
}
// load next page
self.from_block = Some(to_block);
This should be to_block + 1, otherwise you double-sync blocks at the page boundaries.
will verify this
@philsippl is correct, will fix this
Closes #954
Motivation
Solution
Implement a new stream that breaks the requested block range into smaller pages, which are loaded one at a time and streamed to the user.
Note: this only solves block-range limitations. If an RPC provider limits queries to 1000 blocks, for instance, you can set the page size to 1000. However, if the limitation is on the number of logs (as with Infura), this can't be solved automatically or completely: you can try smaller page sizes, but because the number of logs in an arbitrary block range is itself arbitrary, it can be hard to decide on a page size.
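One workable approach for log-count limits, sketched below under assumptions (the `fetch` stand-in and the halving policy are hypothetical, not something this PR implements), is to retry a rejected page with a smaller size:

```rust
// Stand-in for an RPC that fails when the requested range is "too large"
// (hypothetical; a real provider rejects based on log count).
fn fetch(start: u64, end: u64) -> Result<Vec<u64>, ()> {
    if end - start >= 8 { Err(()) } else { Ok((start..=end).collect()) }
}

/// Fetch [start, end] inclusive, halving the page size whenever the
/// provider rejects a page, and retrying the same start block.
fn fetch_adaptive(start: u64, end: u64, mut page: u64) -> Vec<u64> {
    let mut out = Vec::new();
    let mut s = start;
    while s <= end {
        let e = (s + page - 1).min(end);
        match fetch(s, e) {
            Ok(logs) => {
                out.extend(logs);
                s = e + 1; // advance past this page
            }
            // Too many results: halve the page size and retry.
            Err(()) => page = (page / 2).max(1),
        }
    }
    out
}

fn main() {
    let logs = fetch_adaptive(0, 20, 16);
    assert_eq!(logs.len(), 21); // every block covered exactly once
}
```

This still can't guarantee success in one pass, since a single block can hold more logs than the provider's cap, but it converges for range-based rejections.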
PR Checklist