ClientRefs, Poller, and Streams #179

Merged (33 commits into main, Mar 4, 2024)
Conversation

prestwich (Member)

Separate RpcClient from RpcClientInner so the client can be used through a weak or lifetime-borrowed handle, without holding a strong reference that keeps it alive longer than intended.

Motivation

  • Make it so that long-lived tasks do not permanently hold RpcClients open
  • Lay groundwork for the block heartbeat and other tasks that live only as long as an Arc
  • Make it clear which client handle to use in any given situation

Solution

  • Make RpcClient<T> a thin newtype around Arc<RpcClientInner<T>>, with WeakClient<T> (Weak) and ClientRef<'a, T> (borrowed) aliases for non-owning access

PR Checklist

  • Added Tests
  • Added Documentation
  • Breaking changes

@prestwich added the enhancement label Feb 2, 2024
@prestwich self-assigned this Feb 2, 2024
@prestwich changed the title from "wip: weak and ref clients" to "ClientRefs, Poller, and Streams" Feb 3, 2024
@prestwich force-pushed the prestwich/block-heart branch from a2a7f5d to 0188a78 on February 12, 2024 18:19
Comment on lines 75 to 77
// Then try to fill as many blocks as possible
while !known_blocks.contains(&block_number) {
    let block = provider.get_block_by_number(block_number, false).await;
would consider doing a tokio::future::join_all if the gap between block_number and next_yield is >N blocks

Added a TODO comment
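For readers following along, a minimal sketch of the batched fetch suggested above. Note that the join_all combinator lives in the futures crate (futures::future::join_all); the Provider and Block stubs, the fill_gap name, and the gap_threshold parameter below are illustrative stand-ins rather than the PR's actual types.

use futures::future::join_all;

// Illustrative stand-ins for the provider and block types under review.
#[derive(Debug)]
struct Block { number: u64 }
struct Provider;

impl Provider {
    async fn get_block_by_number(&self, number: u64, _full: bool) -> Result<Block, String> {
        Ok(Block { number })
    }
}

/// If the poller has fallen more than `gap_threshold` blocks behind, fetch the
/// whole missing range concurrently instead of issuing one request per loop
/// iteration.
async fn fill_gap(provider: &Provider, next_yield: u64, tip: u64, gap_threshold: u64) -> Vec<Block> {
    if tip.saturating_sub(next_yield) <= gap_threshold {
        return Vec::new();
    }
    let futs = (next_yield..=tip).map(|n| provider.get_block_by_number(n, false));
    // Await all requests together; failed fetches are simply skipped here.
    join_all(futs).await.into_iter().filter_map(Result::ok).collect()
}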

@gakonst left a comment
cool! i think we can disentangle this from @onbjerg's #190 work? any blockers / next steps?

impl WatchConfig {
    /// Create a new watch for a transaction.
    pub fn new(tx_hash: B256, tx: oneshot::Sender<()>) -> Self {
        Self { tx_hash, confirmations: 0, timeout: Default::default(), tx }
I think?

Suggested change:
-        Self { tx_hash, confirmations: 0, timeout: Default::default(), tx }
+        Self { tx_hash, confirmations: 1, timeout: Default::default(), tx }

crates/rpc-client/src/poller.rs (review thread, outdated, resolved)
crates/rpc-client/src/poller.rs (review thread, outdated, resolved)
let fut = async move {
    let limit = self.limit.unwrap_or(usize::MAX);
    for _ in 0..limit {
        let client = match self.client.upgrade() {
do we want to upgrade each time in the loop?

I believe we want to break if this loop holds the only remaining reference to the provider
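For context, a sketch of the pattern being discussed, with illustrative names rather than the PR's actual poller API: the task holds only a Weak handle and upgrades it on every iteration, so the loop ends as soon as every strong RpcClient handle has been dropped.

use std::sync::Weak;
use std::time::Duration;

struct ClientInner; // stand-in for RpcClientInner<T>

impl ClientInner {
    async fn poll_once(&self) { /* issue the RPC request here */ }
}

/// Illustrative polling task that holds only a weak reference to the client.
async fn poll_task(client: Weak<ClientInner>, limit: usize) {
    for _ in 0..limit {
        // Upgrade each iteration: once all strong handles are gone the
        // provider has been dropped and the task should stop, rather than
        // keeping the client alive for its own sake.
        let Some(client) = client.upgrade() else { break };
        client.poll_once().await;
        // Release the strong handle before sleeping so the task never holds
        // the client open across the poll interval.
        drop(client);
        tokio::time::sleep(Duration::from_secs(1)).await;
    }
}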

Comment on lines +14 to +22
/// An [`RpcClient`] in a [`Weak`] reference.
pub type WeakClient<T> = Weak<RpcClientInner<T>>;

/// A borrowed [`RpcClient`].
pub type ClientRef<'a, T> = &'a RpcClientInner<T>;

/// A JSON-RPC client.
#[derive(Debug)]
pub struct RpcClient<T>(Arc<RpcClientInner<T>>);
Is there any amazing readout for how to think about Weak vs full Arc? https://doc.rust-lang.org/std/sync/struct.Weak.html

It allows breaking out of task loops when the provider is dropped, in addition to when the channel drops; makes sense with broadcast channels, I guess.
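As a rough rule of thumb for the three handles (a sketch; only the RpcClient, WeakClient, and ClientRef names come from the PR, and the inner/weak methods below are illustrative): keep RpcClient wherever the client must stay alive, pass a borrow for scoped work, and give long-lived background tasks a Weak that stops resolving once every owner is gone.

use std::sync::{Arc, Weak};

struct RpcClientInner; // stand-in for the real inner client

// Owning handle: keeps the inner client alive (RpcClient<T>).
struct RpcClient(Arc<RpcClientInner>);

impl RpcClient {
    // Borrowed access for the duration of a call (ClientRef<'_, T>).
    fn inner(&self) -> &RpcClientInner { &self.0 }
    // Non-owning handle for background tasks (WeakClient<T>).
    fn weak(&self) -> Weak<RpcClientInner> { Arc::downgrade(&self.0) }
}

fn main() {
    let client = RpcClient(Arc::new(RpcClientInner));
    let weak = client.weak();
    let _ = client.inner(); // ClientRef-style borrowed access, scoped to this call
    assert!(weak.upgrade().is_some()); // an owning handle is still alive
    drop(client);
    assert!(weak.upgrade().is_none()); // all strong handles gone; tasks can exit
}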

Comment on lines +372 to +378
Some(Ok(value)) => match serde_json::from_str(value.get()) {
    Ok(item) => return task::Poll::Ready(Some(Ok(item))),
    Err(e) => {
        trace!(value = value.get(), error = ?e, "Received unexpected value in subscription.");
        continue;
    }
},
awesome

/// stream will attempt to deserialize the notifications and yield the
/// `serde_json::Result` of the deserialization.
#[derive(Debug)]
pub struct SubResultStream<T> {
Should these be called SubscriptionStream & SubscriptionStreamResult?

}

/// A stream of notifications from the server, identified by a local ID. This
/// stream may yield unexpected types.
what's an example of an unexpected type?

Comment on lines 151 to 161

/// Reap any timeout
fn reap_timeouts(&mut self) {
    let now = std::time::Instant::now();
    let to_keep = self.reap_at.split_off(&now);
    let to_reap = std::mem::replace(&mut self.reap_at, to_keep);

    for tx_hash in to_reap.values() {
        self.unconfirmed.remove(tx_hash);
    }
}
Reap is basically "unsubscribe from transactions that have been pending for longer than T"?
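That reading matches the code: split_off(&now) moves every entry whose deadline is still in the future out of the map, so what remains (and gets swapped into to_reap) is exactly the set of watches whose timeout has already passed. A small self-contained illustration of that partitioning, using u64 keys as stand-ins for Instant deadlines and placeholder hashes:

use std::collections::BTreeMap;

fn main() {
    // Keys are timeout deadlines (u64 stand-ins for `Instant`), values are tx hashes.
    let mut reap_at: BTreeMap<u64, &str> = BTreeMap::from([
        (10, "0xaaa"), // already expired
        (20, "0xbbb"), // already expired
        (40, "0xccc"), // still pending
    ]);
    let now = 30;

    // `split_off(&now)` keeps every entry with key >= now in the returned map,
    // leaving only the already-expired entries behind in `reap_at`.
    let to_keep = reap_at.split_off(&now);
    let to_reap = std::mem::replace(&mut reap_at, to_keep);

    assert_eq!(to_reap.values().copied().collect::<Vec<_>>(), ["0xaaa", "0xbbb"]);
    assert_eq!(reap_at.len(), 1); // only the unexpired watch remains
}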

/// Handle a new block by checking if any of the transactions we're
/// watching are in it, and if so, notifying the watcher. Also updates
/// the latest block.
fn handle_new_block(&mut self, block: Block, latest: &watch::Sender<Block>) {
👍

Comment on lines +348 to +349
let http = Http::<Client>::new(url);
let provider = RootProvider::<TmpNetwork, _>::new(RpcClient::new(http, true));
we should sugar this to Provider::new(url) and have it work for all HTTP/WS/IPC etc.

Comment on lines 358 to 360
let pending_tx = provider.send_transaction(&TxLegacy(tx)).await.expect("failed to send tx");
eprintln!("{pending_tx:?}");
let () = pending_tx.await.expect("failed to await pending tx");
yay

@@ -552,7 +552,7 @@ impl<'a> TryFrom<&'a String> for Provider<Http<Client>> {
 #[cfg(test)]
 mod tests {
     use crate::{
-        provider::{Provider, TempProvider},
+        tmp::{Provider, TempProvider},
note we're gonna move off the tmp stuff once @onbjerg's work is done

Comment on lines +143 to +144
#[cfg_attr(target_arch = "wasm32", async_trait::async_trait(?Send))]
#[cfg_attr(not(target_arch = "wasm32"), async_trait::async_trait)]
nit: should we start replacing these with native async?
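For reference, a sketch of what the native alternative looks like (on an illustrative trait, not the crate's actual one): async fn in traits is stable since Rust 1.75, but it is not dyn-compatible and does not add a Send bound to the returned future, which is part of what #[async_trait] and the wasm/non-wasm cfg split currently handle.

// Native `async fn` in a trait, stable since Rust 1.75. Illustrative only.
trait Transport {
    async fn request(&self, body: String) -> Result<String, String>;
}

struct Echo;

impl Transport for Echo {
    async fn request(&self, body: String) -> Result<String, String> {
        Ok(body)
    }
}

// Callers have to stay generic (`impl Transport`): `dyn Transport` is not
// allowed, and the returned future is not automatically `Send`.
async fn roundtrip(t: &impl Transport) -> Result<String, String> {
    t.request("ping".to_string()).await
}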

@mattsse left a comment
this should work,

some nits

crates/providers/src/heart.rs (review thread, resolved)
crates/providers/src/heart.rs (review thread, outdated, resolved)
crates/providers/src/heart.rs (review thread, resolved)

gakonst commented Mar 1, 2024

@DaniPopes let's review & get this in if feature complete and proceed with events?

@DaniPopes marked this pull request as ready for review March 4, 2024 10:51
@DaniPopes requested a review from Evalir as a code owner March 4, 2024 10:51
@mattsse left a comment
I think I can see how it all fits together.

I have one concern re usage of sleep in select!

otherwise lgtm

crates/providers/src/heart.rs (review thread, outdated, resolved)
@DaniPopes merged commit 86027c9 into main on Mar 4, 2024
13 of 14 checks passed
@DaniPopes deleted the prestwich/block-heart branch March 4, 2024 13:10