
Middleware for metrics #576

Merged (28 commits, Dec 1, 2021)
Conversation

@maciejhirsz (Contributor) commented on Nov 23, 2021:

Still a bunch of loose ends:

  • Add a method to set the metrics on the server builder.
  • HTTP server side.
  • TODO doc comments.
  • Test / examples.

Other than that it should be functional and backwards compatible. I needed to add an extra boxed future in the async method call pipeline that I may be able to remove down the line (depending on how much I want to mess with the FutureDriver).

```rust
}

/// Create a new `MethodSink` with a limited response size
pub fn new_with_limit(tx: mpsc::UnboundedSender<String>, max_response_size: u32) -> Self {
```
@niklasad1 (Member) commented on Nov 24, 2021:

nice 👍
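For context, the size check a limited sink implies might look roughly like this. This is a hypothetical sketch: `BoundedSink` and its methods are illustrative stand-ins, not the actual jsonrpsee `MethodSink`:

```rust
/// Hypothetical sketch of a response-size limit; `BoundedSink` is an
/// illustrative stand-in, not the real jsonrpsee `MethodSink`.
struct BoundedSink {
    max_response_size: u32,
}

impl BoundedSink {
    fn new_with_limit(max_response_size: u32) -> Self {
        Self { max_response_size }
    }

    /// Ok if the serialized response fits, Err(actual_len) otherwise.
    fn check(&self, response: &str) -> Result<(), usize> {
        if response.len() > self.max_response_size as usize {
            Err(response.len())
        } else {
            Ok(())
        }
    }
}
```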

* Fix try-build tests

* Add a middleware setter and an example

* Actually add the example

* Grumbles

* Use an atomic

* Set middleware with a constructor instead

* Resolve a todo

* Update ws-server/src/server.rs

Co-authored-by: Maciej Hirsz <[email protected]>

* Update ws-server/src/server.rs

Co-authored-by: Maciej Hirsz <[email protected]>

* Update ws-server/src/server.rs

Co-authored-by: Maciej Hirsz <[email protected]>

Co-authored-by: Maciej Hirsz <[email protected]>
Review thread on examples/middleware.rs: resolved (outdated).
@niklasad1 (Member) left a comment:

I like this, it's just a matter of adding the remaining code.

```diff
@@ -76,6 +76,10 @@ pub use jsonrpsee_types as types;
 #[cfg(any(feature = "http-server", feature = "ws-server"))]
 pub use jsonrpsee_utils::server::rpc_module::{RpcModule, SubscriptionSink};

+/// TODO: (dp) any reason not to export this? narrow the scope to `jsonrpsee_utils::server`?
+#[cfg(any(feature = "http-server", feature = "ws-server"))]
```
A Member commented:
You can remove these feature guards; the utils crate has conditional compilation of its own that should be sufficient, but I think you have to change utils to a non-optional dependency for that to work.
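A minimal sketch of the Cargo.toml change this implies; the version numbers and feature wiring are illustrative, not copied from the actual manifest:

```toml
# Before: utils is optional, so every re-export of its items needs a
# #[cfg(any(feature = "http-server", feature = "ws-server"))] guard.
[dependencies]
jsonrpsee-utils = { version = "0.5", optional = true }

[features]
http-server = ["jsonrpsee-utils"]
ws-server = ["jsonrpsee-utils"]

# After: drop `optional = true` (and the feature entries above). The cfg
# guards at the re-export site can then go, because the utils crate gates
# its own modules internally.
```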

Review thread on examples/middleware.rs: resolved (outdated).
Comment on lines 31 to 33
```rust
/// Intended to carry timestamp of a request, for example `std::time::Instant`. How the middleware
/// measures time, if at all, is entirely up to the implementation.
type Instant: Send + Copy;
```
A Collaborator commented:
I wonder whether Instant is the right name; I'd rather assume very little about how it will be used and call it something like OnRequest perhaps. In the example, for instance, it's used as a counter, which seems like a perfectly reasonable use case too :)

I also wonder about allowing Clone rather than Copy; would this have any performance implications if the type is copyable anyway? It would perhaps open up some additional use cases. (on the other hand, we can relax restrictions later, and there aren't many use cases for an on_request that knows nothing about the request anyway, so I don't mind either way really!)

A Contributor replied:
Naming this one is hard, like really hard. Maybe it should be X to communicate its "whatever"-nature... (only half-joking).
Or we could take the opposite stance and remove it and force std::time::Instant – after all we're not committing to this being the final shape of middleware; all we need is enough to cover substrate's needs.

A Contributor replied:
How about Track? type Track = std::time::Instant reads alright; type Track = usize is ok-ish. It probably violates all kinds of API guidelines, but this one is a bit special. :/
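To make the naming debate concrete, here is a minimal sketch of the trait shape under discussion, with both a timestamp-style and a counter-style associated type. The names mirror the review snippets but are assumptions, not the final jsonrpsee API:

```rust
use std::cell::Cell;
use std::time::Instant;

/// Sketch of the trait under discussion; names are illustrative.
trait Middleware {
    /// Produced by `on_request`, handed back to `on_response`. Both a
    /// timestamp and a plain counter satisfy the bound.
    type Track: Send + Copy;
    fn on_request(&self) -> Self::Track;
    fn on_response(&self, started: Self::Track);
}

/// Timing middleware: `Track` is a timestamp.
struct Timing;
impl Middleware for Timing {
    type Track = Instant;
    fn on_request(&self) -> Instant {
        Instant::now()
    }
    fn on_response(&self, started: Instant) {
        println!("call took {:?}", started.elapsed());
    }
}

/// Counting middleware: `Track` is just the request's sequence number.
struct Counter(Cell<usize>);
impl Middleware for Counter {
    type Track = usize;
    fn on_request(&self) -> usize {
        self.0.set(self.0.get() + 1);
        self.0.get()
    }
    fn on_response(&self, nth: usize) {
        println!("finished request #{}", nth);
    }
}
```

Both impls type-check against the same `Send + Copy` bound, which is why a generic name like `Track` reads better than `Instant` for the counter case.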

```rust
}

impl<A, B> Middleware for (A, B)
where
```
@jsdw (Collaborator) commented on Nov 25, 2021:
Is it worth a macro impl to cover tuples up to some certain size (just for better ergonomics for >2 middlewares)?

A Contributor replied:
Probably, but not until we know we need it imo. :)
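If the tuple impls do get macro-generated later, a sketch could look like the following. The trait here is a simplified stand-in for the one under review, and the macro fans calls out to each tuple element by index:

```rust
use std::cell::Cell;

/// Simplified stand-in for the trait under review.
trait Middleware {
    type Track: Send + Copy;
    fn on_request(&self) -> Self::Track;
    fn on_response(&self, t: Self::Track);
}

/// Trivial counting middleware for demonstration.
struct Count(Cell<usize>);
impl Middleware for Count {
    type Track = usize;
    fn on_request(&self) -> usize {
        self.0.set(self.0.get() + 1);
        self.0.get()
    }
    fn on_response(&self, _nth: usize) {}
}

// Generate `Middleware` for tuples: each element's `Track` values are
// collected into a tuple on request and split back out on response.
macro_rules! impl_middleware_tuple {
    ($($n:tt : $t:ident),+) => {
        impl<$($t: Middleware),+> Middleware for ($($t,)+) {
            type Track = ($($t::Track,)+);
            fn on_request(&self) -> Self::Track {
                ($(self.$n.on_request(),)+)
            }
            fn on_response(&self, t: Self::Track) {
                $(self.$n.on_response(t.$n);)+
            }
        }
    };
}

impl_middleware_tuple!(0: A);
impl_middleware_tuple!(0: A, 1: B);
impl_middleware_tuple!(0: A, 1: B, 2: C);
impl_middleware_tuple!(0: A, 1: B, 2: C, 3: D);
```

One invocation per arity keeps the macro simple while covering as many middlewares as needed.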

dvdplm and others added 7 commits November 29, 2021 14:23
* Add an example of adding multiple middlewares.

* Update examples/multi-middleware.rs

Co-authored-by: Maciej Hirsz <[email protected]>

* Update examples/Cargo.toml

Co-authored-by: Maciej Hirsz <[email protected]>

Co-authored-by: Maciej Hirsz <[email protected]>
* Move `Middleware` to jsonrpsee-types

* Move Middleware trait to jsonrpsee-types

* Add some docs.
Review thread on ws-server/src/server.rs: resolved (outdated).
@maciejhirsz maciejhirsz marked this pull request as ready for review November 30, 2021 14:33
@maciejhirsz maciejhirsz requested a review from a team as a code owner November 30, 2021 14:33
@jsdw (Collaborator) commented on Nov 30, 2021:

One strange thing from me: on my M1 Mac, when I run cargo clean and then cargo check in the root folder, I run into:

```text
% cargo check
    Checking jsonrpsee-types v0.5.1 (/Users/james/Work/jsonrpsee/types)
    Checking jsonrpsee-utils v0.5.1 (/Users/james/Work/jsonrpsee/utils)
    Checking jsonrpsee-ws-client v0.5.1 (/Users/james/Work/jsonrpsee/ws-client)
    Checking jsonrpsee-ws-server v0.5.1 (/Users/james/Work/jsonrpsee/ws-server)
    Checking jsonrpsee-http-client v0.5.1 (/Users/james/Work/jsonrpsee/http-client)
    Checking jsonrpsee-http-server v0.5.1 (/Users/james/Work/jsonrpsee/http-server)
error[E0433]: failed to resolve: could not find `MissedTickBehavior` in `time`
  --> ws-server/src/future.rs:59:44
   |
59 |         heartbeat.set_missed_tick_behavior(time::MissedTickBehavior::Skip);
   |                                                  ^^^^^^^^^^^^^^^^^^ could not find `MissedTickBehavior` in `time`

error[E0599]: no method named `set_missed_tick_behavior` found for struct `Interval` in the current scope
  --> ws-server/src/future.rs:59:13
   |
59 |         heartbeat.set_missed_tick_behavior(time::MissedTickBehavior::Skip);
   |                   ^^^^^^^^^^^^^^^^^^^^^^^^ method not found in `Interval`

Some errors have detailed explanations: E0433, E0599.
For more information about an error, try `rustc --explain E0433`.
error: could not compile `jsonrpsee-ws-server` due to 2 previous errors
```

Does anybody else run into this?

Running latest stable rust: rustc 1.56.1 (59eed8a2a 2021-11-01)

Edit: I can see that the "time" feature is enabled for that crate, so that enum should definitely exist. I guess for some reason it's being ignored when I run cargo check.

Edit 2: Ah, I can see in my Cargo.lock that it resolved to tokio 1.6.1, which does not have that enum. I'd suggest we be a little more specific about the version of tokio supported (Cargo.toml currently just asks for "1") :)

Edit 3: It looks like tokio 1.8.0 is the first release to contain the MissedTickBehavior stuff.

@niklasad1 (Member) commented on Nov 30, 2021:

Edit 3: It looks like tokio 1.8.0 is the first release to contain the MissedTickBehavior stuff.

nice catch, let's pin tokio to 1.8 then :)
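The pin could look like this in the Cargo.toml; the feature list is illustrative, not copied from the actual manifest:

```toml
[dependencies]
# tokio 1.8.0 is the first release with tokio::time::MissedTickBehavior;
# under Cargo's caret semantics, "1.8" means >=1.8.0, <2.0.0.
tokio = { version = "1.8", features = ["time"] }
```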

```rust
};
return self.send_error(id, err);
} else {
    return self.send_error(id, ErrorCode::InternalError.into());
```
A Collaborator commented:
I assume that the max size error is an error based on user input being too large?

I wonder whether the tracing::error!("Error serializing response: {:?}", err); line should instead be above this InternalError bit, so we only get output for the unexpected errors?

A Member replied:
It means that the output from an executed call exceeded the max limit, but the "request" itself was below the limit. So it depends on what you mean by "user": the registered callback created a response bigger than the max limit.

Yes, or downgrade the error above to warn or something, but hitting InternalError "should" be impossible outside of a bug IIRC.
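The reordering suggested here could look roughly like this; the function, the error variants, and the length-based check are simplified stand-ins for the real server code, shown only to illustrate the routing:

```rust
/// Simplified stand-ins for the real server error codes.
#[derive(Debug, PartialEq)]
enum ErrorCode {
    OversizedResponse,
    InternalError,
}

/// Route a serialization failure: an oversized call output gets a
/// client-facing error without noisy logging; anything else is logged
/// as an unexpected internal error.
fn route_serialization_error(response_len: usize, max_response_size: usize) -> ErrorCode {
    if response_len > max_response_size {
        // Expected failure mode: the registered callback produced a
        // response bigger than the configured limit.
        ErrorCode::OversizedResponse
    } else {
        // Unexpected: log loudly, this "should" be unreachable.
        eprintln!("Error serializing response (len {})", response_len);
        ErrorCode::InternalError
    }
}
```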

```rust
    }
};

middleware.on_disconnect();

// Drive all running methods to completion
method_executors.await;
```
A Collaborator commented:
Just a note that we should perhaps add a comment or something to make it clear to our future selves that we shouldn't return early before this await runs?

@jsdw (Collaborator) left a comment:
Looking great to me :) Love the examples and doc comments. Just a couple of small notes, the main things being ensuring tokio ^1.8 is used, and a comment about early returning before that await, if that seems sensible.

```rust
rx_batch.close();
let results = collect_batch_response(rx_batch).await;

if let Err(err) = sink.send_raw(results) {
    tracing::error!("Error sending batch response to the client: {:?}", err)
```
A Member commented:
hmm, if the send_raw will only fail once the server is closed, is that why we don't register on_response when that fails?

otherwise, we seem to register failed responses too.
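The behaviour being asked about could be made explicit along these lines. `Sink` and `Metrics` are minimal hypothetical stand-ins, not the real jsonrpsee types, and the assumption that `send_raw` fails only once the connection is closed comes from the comment above:

```rust
use std::cell::Cell;

/// Minimal stand-in for the real sink; assumed to fail only once the
/// server/connection is closed.
struct Sink {
    closed: bool,
}

impl Sink {
    fn send_raw(&self, msg: String) -> Result<(), String> {
        if self.closed {
            Err(msg)
        } else {
            Ok(())
        }
    }
}

/// Minimal stand-in for a metrics middleware.
struct Metrics {
    responses: Cell<usize>,
}

impl Metrics {
    fn on_response(&self) {
        self.responses.set(self.responses.get() + 1);
    }
}

/// Record a middleware response event only if the send succeeded, so
/// failed sends never count as delivered responses.
fn send_and_record(sink: &Sink, metrics: &Metrics, msg: String) {
    match sink.send_raw(msg) {
        Ok(()) => metrics.on_response(),
        Err(err) => eprintln!("Error sending batch response to the client: {:?}", err),
    }
}
```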

@niklasad1 (Member) left a comment:
LGTM, clean

4 participants