[http client]: refactor with "syncronous-like" design #156
Conversation
Improve benchmarks to take concurrent requests into account.
Improve benchmarks to take concurrent requests into account.
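As a rough illustration of what "taking concurrent requests into account" can look like, here is a hedged criterion sketch (not the actual `benches/benches.rs`; `round_trip`, the runtime setup, and the concurrency levels are placeholder assumptions): each iteration fires several round trips at once and joins them, so client-side contention shows up in the measurement.

```rust
use criterion::{criterion_group, criterion_main, Criterion};
use tokio::runtime::Runtime;

// Placeholder for one JSON-RPC request/response against a running server;
// the real benchmark would call the actual client here.
async fn round_trip() {}

fn concurrent_round_trips(c: &mut Criterion) {
    let rt = Runtime::new().unwrap();
    for concurrency in [2usize, 4, 8].iter() {
        c.bench_function(&format!("concurrent round trip/{}", concurrency), |b| {
            b.iter(|| {
                rt.block_on(async {
                    // Issue `concurrency` requests at once and wait for all of them.
                    let calls = (0..*concurrency).map(|_| round_trip());
                    futures::future::join_all(calls).await;
                })
            })
        });
    }
}

criterion_group!(benches, concurrent_round_trips);
criterion_main!(benches);
```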
```diff
@@ -33,7 +33,7 @@ use jsonrpsee::types::jsonrpc::{JsonValue, Params};
 const SOCK_ADDR: &str = "127.0.0.1:9933";
 const SERVER_URI: &str = "http://localhost:9933";

-#[async_std::main]
+#[tokio::main]
```
Just curious: why switch to Tokio here?
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
`hyper` only works with tokio; Pierre spawned a background thread around that just to support any runtime.
If we go ahead with this change it should be clearly documented, or we should spawn a background thread for it again.
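For context, a minimal sketch of that background-thread approach, assuming hyper 0.14, tokio 1.x and the `futures` channels; `FrontToBack` and `spawn_background_client` are illustrative names, not the crate's actual API. A dedicated thread owns a tokio runtime and the hyper client, and code running on any other executor just talks to it over a channel:

```rust
use futures::channel::{mpsc, oneshot};
use futures::StreamExt;
use std::thread;

// Illustrative message type: the caller sends a URI and gets the body bytes back.
struct FrontToBack {
    uri: hyper::Uri,
    send_back: oneshot::Sender<hyper::Result<hyper::body::Bytes>>,
}

// Spawn a dedicated thread that owns a tokio runtime and a hyper client,
// so the calling side can run on any executor (async-std, smol, ...).
fn spawn_background_client() -> mpsc::Sender<FrontToBack> {
    let (tx, mut rx) = mpsc::channel::<FrontToBack>(16);
    thread::spawn(move || {
        let rt = tokio::runtime::Runtime::new().expect("failed to build tokio runtime");
        rt.block_on(async move {
            let client = hyper::Client::new();
            while let Some(msg) = rx.next().await {
                let result = match client.get(msg.uri).await {
                    Ok(rsp) => hyper::body::to_bytes(rsp.into_body()).await,
                    Err(e) => Err(e),
                };
                // The caller may have gone away; ignore a failed send.
                let _ = msg.send_back.send(result);
            }
        });
    });
    tx
}
```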
```rust
pub const PARSE_ERROR: &str = "Parse error";
pub const INTERNAL_ERROR: &str = "Internal error";
pub const INVALID_PARAMS: &str = "Invalid params";
pub const INVALID_REQUEST: &str = "Invalid request";
```
Note, the specification states `Invalid Request` but we use `Invalid request`.
https://github.com/paritytech/jsonrpc/ does the same, so let's keep it like that for now.
Experimental: read the number of bytes from the `HTTP Content-Length` header to pre-allocate the body buffer and bail early if the length is bigger than the `max_request_body_size`. Needs to be benchmarked with bigger requests.
```diff
@@ -18,43 +20,42 @@ pub struct HttpTransportClient {
 	/// HTTP client,
 	client: hyper::Client<hyper::client::HttpConnector>,
 	/// Configurable max request body size
-	max_request_body_size: usize,
+	max_request_body_size: u32,
```
2 GB max size. I changed it because I don't like `usize`: it's implicit, i.e. platform dependent.
It could be `u64` as well, but I think 2 GB should be sufficient; some browsers limit to 2 GB (e.g., Firefox).
```rust
// Note that we don't check the Content-Type of the request. This is deemed
// unnecessary, as a parsing error will happen anyway.
let mut body = Vec::with_capacity(body_size as usize);
```
The idea is that this will be faster for big payloads that would otherwise need to be re-allocated, but
I haven't benchmarked this change for bigger payloads because the HttpServer hardcodes the max request size to 16384.
Thus, it's just a guess.
I like `with_capacity` even if it turns out it doesn't change performance at all (I suspect it doesn't and that the compiler does the right thing anyway), because it communicates to the reader that we intended to use this for the body before doing it.
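A minimal sketch of the idea under discussion, assuming hyper 0.14; `read_body_with_limit` and the error strings are illustrative, not the actual transport code. It reads the advertised `Content-Length`, bails out early if it exceeds the configured `u32` limit, pre-allocates the buffer, and keeps enforcing the limit while streaming in case the header lies:

```rust
use hyper::body::HttpBody;

async fn read_body_with_limit(
    response: hyper::Response<hyper::Body>,
    max_request_body_size: u32,
) -> Result<Vec<u8>, &'static str> {
    // Bail early if the advertised length already exceeds the limit.
    let body_size = response
        .headers()
        .get(hyper::header::CONTENT_LENGTH)
        .and_then(|len| len.to_str().ok()?.parse::<u32>().ok())
        .unwrap_or(0);
    if body_size > max_request_body_size {
        return Err("body exceeds maximum allowed size");
    }

    // Pre-allocate so big payloads don't get re-allocated repeatedly while streaming.
    let mut body = Vec::with_capacity(body_size as usize);
    let mut stream = response.into_body();
    while let Some(chunk) = stream.data().await {
        let chunk = chunk.map_err(|_| "failed to read body")?;
        // The header can lie, so keep enforcing the limit while reading.
        if body.len() + chunk.len() > max_request_body_size as usize {
            return Err("body exceeds maximum allowed size");
        }
        body.extend_from_slice(&chunk);
    }
    Ok(body)
}
```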
benches/benches.rs (outdated)
```rust
}
})
},
// TODO(niklasad1): this doesn't work on my machine.
```
On my machine it seems to block after the first run, looks like this:

```
synchronous WebSocket round trip
                        time:   [94.082 us 94.526 us 94.977 us]
Found 5 outliers among 100 measurements (5.00%)
  1 (1.00%) low mild
  3 (3.00%) high mild
  1 (1.00%) high severe

concurrent WebSocket round trip/6
                        time:   [226.41 us 226.88 us 227.40 us]
Found 1 outliers among 100 measurements (1.00%)
  1 (1.00%) low mild

Benchmarking concurrent WebSocket round trip/12: Warming up for 3.0000 s
```

It's stuck on the last one. Is that what you see too?
Yes, I can take a look at what's going on there today.
It looks like a deadlock ☠️, probably some channel that is full ^^
Found the problem https://github.com/paritytech/jsonrpsee/blob/v2/src/ws/transport.rs#L352, it essentially just allows 8 concurrent tasks to be handled per connection, and if the buffer gets full -> deadlock
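To make the failure mode concrete, here is a tiny self-contained illustration (not the jsonrpsee code; the capacity of 8 only mirrors the hardcoded value mentioned above): awaiting a send on a full bounded channel never completes when the only receiver runs later in the same task, which is exactly this kind of deadlock.

```rust
use futures::channel::mpsc;
use futures::{SinkExt, StreamExt};

#[tokio::main]
async fn main() {
    let (mut tx, mut rx) = mpsc::channel::<u32>(8);

    // Once the buffer fills up, `send().await` waits for the receiver to make room,
    // but the receive loop below never gets a chance to run: the task hangs here.
    for i in 0..64u32 {
        tx.send(i).await.unwrap();
        println!("sent {}", i);
    }

    while let Some(n) = rx.next().await {
        println!("received {}", n);
    }
}
```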
* update http example
* ungeneric crate
* update dependencies
* [client]: add WebSocket client again.
* [deps]: remove needless dev dependencies
* nits: forgot to commit new files
* [ws client]: `send_text` instead of `send_binary` This is temporary fix to work with the `server` which assumes that `WebSocket` resonses are `text`
* chore: add a bunch of more logging
* [ws]: port tests but some are failing.
* chore: fmt
* [server API]: expose `fn local_addr` as public API.
* [tests]: make them less ugly by using `127.0.0.1:0`
* fix: a bunch of compiler warnings.
* [api]: uniform naming, `bind -> new` in transport.
* [websocket server]: reply when deserial fails When the server receives an request with invalid JSON `-32700, Parse error shall be returned`
* chore: fmt
* [tests]: fix remaining tests
* [websocket server]: support `binary` and `text`
* [ws server]: fix bug in subscription response. `.await` was missing in RegisteredSubscription::send() and no responses were actually sent which this commit fixes.
* [client API]: export `WsSubscription`
* [examples]: use `localhost` instead of `127.0.0.1` Hostname is required when using `wss` and `127.0.0.1` is not valid hostname.
* [examples]: add subscription example.
* chore: fmt
* [ws server]: fix bug register new subscription. Fixes newly introduced bug that causes `register_subscription` to have side-effects even if the subscription fails.
* fmt
* more uniform logs
* [ws server]: simple subscription test.
* [ws server]: subscription tests improved.
* [tests]: extract test helpers to a separate crate (#125)
* [ws server]: don't close connection when `deserialization` fails (#131)
* [ws server]: don't close conn. when `deser` fails
* Update src/ws/transport.rs
* grumbles: prefer matching of if else.
* chore: CI warn `intra_doc_link_resolution_failure` (#139) Since we have not updated the documentation properly it's annoying that the entire job fails.
* chore: rustfmt.toml (#138)
* chore: add `rustfmt.toml` for formatting
* style: `cargo fmt --all` with new config
* [server raw params]: fix debug implementation (#137)
* [server]: simply raw params impl Use debug implementation of `common::Params` instead of doing something similar that doesn't work properly.
* [raw params]: derive `Debug` impl.
* [ws server]: parse subscription ID for unsubscription instead of hardcoding `JsonValue::Null` (#136)
* [ws server]: fix broken unsubscribe. Try to parse the subscription ID as the first element of an Array or the `subscription` field of an Object/Map. If both of those fails then regard it as a error.
* fmt
* fix grumbles: remove space indentation
* fix(ws server): sub/unsubscribe to same method should generate an error (#140)
* fix(ws server): sub/unsubscribe to same method err Subscribe and unsubscribe to the same method should generate an error, which this commit fixed. This bug was introduced by myself in fc87889
* Update src/ws/server.rs Co-authored-by: David <[email protected]> Co-authored-by: David <[email protected]>
* chore: add naive benches for request/response (#142) Co-authored-by: Niklas <[email protected]>
* fix(ws server): remove faulty debug_assert (#145) The code assumed that `subscription id` is still in `active_subscriptions` when the connection was dropped. The list of subscriptions (kept in raw server) are not notified when a client dropped its subscription/unsubscribed thus it's possible that the actual subscriptions are closed before the entire client was dropped.
* ci(benches): cargo check on benches. (#146)
* fix(http client): implement `clone` uniform API. (#147)
* chore(deps): update `futures v0.3.7` (#148)
* chore(deps): update remaining crates (#149)
* chore(deps): update `futures v0.3.7`
* chore(deps): bump the rest of deps
* Improve HTTP client background thread (#150)
* refactor: resultify API + some crate reorg (#144)
* [ws client]: resultify API and fix subscribe.
* The commit changes the API to return `Err` when it's possible and to not ignore underlying errors.
* Fix that `fn subscribe` doesn't accept the subscription and unsubscription to be same which causes errors in the server.
* nits: Err::SubscriptionMetod -> Err::Subscription
* refactor(client): common error type
* refactor(http client): resultify
* refactor(common): rename common -> types.. This commit renames the `common module` to `types` and tries to distinguish the types that is directly related to the `JSON-RPC v2 specification` from others. Somethings are a little big sloppy named as naming is hard. Also, as bonus a removed a bunch of needless stuff in http server related to subscription.
* Update src/ws/tests.rs
* style: cargo fmt
* fix(grumble): matches -> assert(matches)
* fix(grumbles): `jsonrpc_v2` -> `jsonrpc`
* fix(nit): remove unused code.
* fix(benches): make it compile again.
* style: cargo fmt
* fix nits (#151)
* fix(ws client): send binary (1 byte less payload)
* docs(ws server): fix bad comment.
* chore: add `editorconfig` (#152)
* chore: make `debug log` less verbose. (#153)
* chore: make `debug log` less verbose. The debug logging was just too verbose and this commit simplies it as follows:
  ```
  DEBUG recv: {"jsonrpc":"2.0","method":"<METHOD>","params":<PARAMS>,"id":<ID>}
  DEBUG send: {"jsonrpc":"2.0","result":"<RESULT>","id":<ID>}
  ```
* style: cargo fmt
* fix: missed logs
* [jsonrpc types]: implement Display for Request/Response (#160)
* feat(jsonrpc response/request): impl `Display`
* refactor(logging): use display impl
* use serde_json for verbosity
* [http client]: refactor with "syncronous-like" design (#156)
* experimental
* ci(benches): sync and concurrent roundtrips Improve benchmarks to take concurrent requests into account.
* ci(benches): sync and concurrent roundtrips Improve benchmarks to take concurrent requests into account.
* fix(nits)
* feat(http client): limit max request body size
* test(http transport): request limit test
* test(http client): add tests.
* fix typo
* fix(benches): make it compile again.
* fix(ws example): revert unintentional change.
* test(http client): subscription response on call.
* fix(cleanup)
* fix(benches): make it compile again.
* Update src/client/http/transport.rs
* fix(http client): `&str` -> `AsRef<str>`
* docs(client types): better docs for Mismatch type.
* style: `Default::default` -> `HttpConfig::default`
* fix(http client): read body size from header. Expermential to read number of bytes from `HTTP Content Length` to pre-allocate the number of bytes and bail early if the length is bigger than the `max_request_body size` Need to be benched with bigger requests.
* test(raw http): enable tests to works again.
* style: cargo fmt
* benches: address grumbles
* feat(jsonrpc response/request): impl `Display`
* refactor(logging): use display impl
* fix(http client): nits.
* Update benches/benches.rs
* fix bad merge.
* chore(deps): update dependencies. (#164)
* feat(http server): configurable request body limit (#162)
* feat(http server): configurable request body limit
* refactor(crate reorg): to have shared http helpers.
* Merge client and server errors.
* Move `http_server_utils` to `utils/http`
* Minor cleanup
* fix nits
* fix(hyper helper): u64 -> u32
* Update src/utils/http/hyper_helpers.rs Co-authored-by: David <[email protected]>
* Update src/utils/http/hyper_helpers.rs Co-authored-by: David <[email protected]>
* fix: grumbles
* Update src/utils/http/hyper_helpers.rs Co-authored-by: David <[email protected]>
* Update src/http/server.rs Co-authored-by: David <[email protected]> Co-authored-by: David <[email protected]>
* ci: remove nightly (#167) Use stabilized `broken_intra_doc_links` instead of `intra_doc_link_resolution_failure`
* fix(websocket client): drop subscriptions that can't keep up with the internal buffer size (#166)
* fix(ws client): drop subscriptions when full. This commit changes the behavior in the `WebSocket Client` where each subscription channel is used in a non-blocking matter until it is determined as full or disconnected. When that occurs the channel is simply dropped and when the user `poll` the subscription it will return all sent subscriptions before it was and terminate (return None) once it's polled one last time. Similarly as `Streams` works in Rust. It also adds configuration for the `WebSocket Client` to configure capacity for the different internal channels to avoid filling the buffers when it's not expected.
* tests(ws client): simple subscription test.
* fix: nits
* Update src/client/ws/client.rs
* refactor(tests): introduce integration_tests Make the repo structure more understable w.r.t testing.
* chore(license): add missing license headers
* Update src/client/ws/client.rs
* Update src/client/ws/client.rs
* style: remove unintended spaces.
* tests: add concurrent deadlock test Ensure that if more than the requested channel buffer capacity is exceeded it should not deadlock. Such as spawning alot of concurrent requests, notifications or new subscriptions.
* Update src/client/ws/client.rs
* fix: review grumbles
* fix nits: `remove needless closure`
* fix: cargo fmt
* Update src/client/ws/client.rs Co-authored-by: David <[email protected]>
* fix more nits Co-authored-by: David <[email protected]>
* fix(ws client): embed request id in `SubscriptionClosed` (#170)
* fix(ws client): embed request id SubscriptClosed Fixes #169
* Update src/client/ws/client.rs
* Update src/client/ws/client.rs Co-authored-by: David <[email protected]>
* Update src/client/ws/client.rs Co-authored-by: David <[email protected]> Co-authored-by: David <[email protected]>
* chore(deps): bump dependencies (#172)
* [ws client]: add tests (#134)
* [test utils]: add `internal_err` and consts [errors]: unify client/server errors [test utils]: fake WebSocket jsonrpc server [ws client]: export errors [ws client]: add some basic tests
* fmt
* remove log target
* fix nits
* [ws client]: add subscription test
* revert unintendend changes.
* fmt
* [ws client]: fix panic in tests
* cleanup
* tests(ws client): test for invalid request ID.
* fix nits
* [ws client]: kill raw client (#171)
* getting started
* WIP WIP
* cleanup
* cleanup v2
* cleanup v3
* perf: use BufReader BufWriter
* fix(request manager): resultify insert API The rationale behind this change is that the `insert_methods` takes ownership of the `send_back_oneshot` and if the operation fails it should be propagated the frontend. So returning the `Err(send_back_oneshot)` if it fails makes it possible.
* fix nits
* examples(ws): revert changes
* Update tests/integration_tests.rs
* nits: fix unwraps
* Update src/client/ws/manager.rs Co-authored-by: David <[email protected]>
* Update src/client/ws/transport.rs Co-authored-by: David <[email protected]>
* Update src/client/ws/client.rs Co-authored-by: David <[email protected]>
* fix build
* refactor: simplify `Error::InvalidRequestId` It was hard to use when the expected id is not known.
* fix(ws client): error handling.
* fix(grumble error type): better error message.
* fix(grumble): docs `JSONRPC WebSocket transport`
* fix(ws manager): fix grumbles.
* Add better documentation
* Rename methods.
* Add `proof` to unreachable!
* fix(ws manager): fix nit in docs.
* fix(grumbles): ws client
* fix more nits
* fix compile warning: export websocket transports.
* Update src/client/ws/manager.rs Co-authored-by: David <[email protected]>
* deps: tokio 1.0 and hyper 0.14 (#176)
* deps: tokio 1.0 and hyper 0.14
* Update Cargo.toml
* refactor: crate re-organization with separate crates (#177)
* [ci]: feature `http` and `ws` removed.
* refactor: re-org crate with smaller crates.
* fmt
* [ci]: remove default features Currently there are no features in the crates, so that check is not needed.
* [http client]: remove unused dependency tokio
* docs(http client): fix nits tokio 0.2 -> tokio 1.0 (#178)
* docs(http client): tokio 0.2 -> tokio 1.0
* fix: better link
* [ci]: github actions (#179)
* docs(http client): tokio 0.2 -> tokio 1.0
* [ci]: remove travis
* [ci]: add github actions.
* [ci]: fix identation nits
* [ci]: use cache for actions
* [ci]: filter to clippy
* [ci]: remove hacks
* separate action for ci and benchmarks
* [ci]: tweak to run on master branch.
* examples/subscription -> examples/ws_subscription
* force CI Co-authored-by: David <[email protected]> Co-authored-by: Maciej Hirsz <[email protected]> Co-authored-by: Atkins <[email protected]>
This PR simplifies the HTTP client and returns errors on each request instead of keeping the request pending until a valid response is received. It is a start to address #67, and some tests are added to prove that the errors are handled correctly.

I was afraid that this would impose a performance regression, which is why I added benchmarks for concurrent requests: this design bloats the executor with quite big futures such as `async fn send_and_wait_for_response`, which is more or less synchronous, instead of having a background event loop. Another benefit is that removing the background event loop simplifies the code significantly.
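As a rough sketch of what "synchronous-like" means here (hedged: this is not the actual implementation; the signature, error type and JSON handling are simplified assumptions), each call builds the HTTP request, awaits the response inline, and returns any error straight to the caller rather than routing everything through a background event loop:

```rust
// Simplified sketch: one request, one awaited response, errors surfaced per call.
async fn send_and_wait_for_response(
    client: &hyper::Client<hyper::client::HttpConnector>,
    target: &str,
    body: String,
) -> Result<serde_json::Value, Box<dyn std::error::Error + Send + Sync>> {
    let request = hyper::Request::post(target)
        .header(hyper::header::CONTENT_TYPE, "application/json")
        .body(hyper::Body::from(body))?;

    // The future completes (or fails) for this request alone; no shared event-loop state.
    let response = client.request(request).await?;
    let bytes = hyper::body::to_bytes(response.into_body()).await?;
    let json = serde_json::from_slice(&bytes)?;
    Ok(json)
}
```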
This PR shows up as 20-30% faster than v2, so it should be a no-brainer, assuming I didn't screw anything up in the benchmarks.

TODOs:
http/raw
Follow up: