Synchronization-less async connections in ws-server #388

Merged: 14 commits into master from mh-async-driver on Jun 30, 2021

Conversation

maciejhirsz (Contributor):
An experiment of sorts that runs all incoming ws connections on a single task/thread. This has multiple advantages: we don't incur synchronization costs or atomic ref counting, so things like the Methods or Settings don't have to be wrapped in Arcs (methods can be shared between connections) or use Arcs internally (lists of allowed origins / hosts).
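
A rough sketch of that single-task layout (not this PR's actual code; `Methods` and `handle_connection` are stand-ins): every connection future lives in one `FuturesUnordered` polled by a single task, so the futures can borrow shared state directly instead of cloning `Arc`s.

```rust
use futures::stream::{FuturesUnordered, StreamExt};
use tokio::net::{TcpListener, TcpStream};

struct Methods; // stand-in for the shared method table

async fn handle_connection(_socket: TcpStream, _methods: &Methods) {
    // ... ws handshake and request handling would go here ...
}

async fn serve(listener: TcpListener, methods: Methods) {
    // All connection futures live in this one set, polled on a single
    // task, so they can borrow `methods` without Arc or locking.
    let mut connections = FuturesUnordered::new();
    loop {
        tokio::select! {
            accepted = listener.accept() => {
                if let Ok((socket, _addr)) = accepted {
                    connections.push(handle_connection(socket, &methods));
                }
            }
            // Drive the already-open connections forward; the guard
            // avoids busy-looping on an empty set.
            _ = connections.next(), if !connections.is_empty() => {}
        }
    }
}
```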

If we really need to use all cores for handling RPC parsing (as opposed to spawning individual method calls on their own tokio tasks), the ws-server could start a thread pool and poll for incoming connections on each thread (TcpListener::poll_accept only needs &self, so we can cheaply share Arc<TcpListener> between threads), although that would incur some costs for keeping track of the number of open connections.
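
A hedged sketch of that alternative, with tokio tasks on the multi-threaded runtime standing in for the thread pool; the atomic counter illustrates the connection-tracking cost mentioned above.

```rust
use std::future::pending;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use tokio::net::TcpListener;

// Each worker runs its own accept loop over the shared listener;
// accept() only needs &self, so an Arc is enough to share it.
async fn accept_loop(listener: Arc<TcpListener>, open: Arc<AtomicUsize>) {
    loop {
        if let Ok((_socket, addr)) = listener.accept().await {
            // This shared counter is the bookkeeping cost: tracking
            // open connections across workers now needs an atomic.
            let n = open.fetch_add(1, Ordering::Relaxed) + 1;
            log::debug!("connection #{} from {}", n, addr);
        }
    }
}

#[tokio::main]
async fn main() -> std::io::Result<()> {
    let listener = Arc::new(TcpListener::bind("127.0.0.1:9944").await?);
    let open = Arc::new(AtomicUsize::new(0));
    for _ in 0..4 {
        tokio::spawn(accept_loop(listener.clone(), open.clone()));
    }
    pending::<()>().await; // keep the runtime alive
    Ok(())
}
```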

niklasad1 (Member) left a comment:

neat, LGTM

Review thread on ws-server/src/server.rs (outdated; resolved).
jsdw (Collaborator) left a comment:

Looks good to me! :)

dvdplm (Contributor) left a comment:

I think this is slightly harder to read than what we had (but it's likely just me struggling to read async code).

Could you elaborate a bit more on the tradeoff here? Single-threaded RPC parsing sounds like it should be fine, but maybe we need to be careful about how much work we do in the parsing stage.

Finally: I imagine the http server needs the same treatment?

Five review threads on ws-server/src/server.rs (three outdated), all resolved.
Comment on lines +190 to +192:

```rust
// As soon as the stop channel yields a message, resolve the driver
// future with a Shutdown error.
if let Poll::Ready(Some(())) = this.stop_receiver.next().poll_unpin(cx) {
    return Poll::Ready(Err(DriverError::Shutdown));
}
```
maciejhirsz (author):

This bit replaces the select that was added on master.
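
A hedged sketch of where that check sits: a hand-rolled `Future::poll` for the connection driver that handles shutdown before any other work (the struct and its fields, apart from what the snippet above shows, are assumptions).

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

use futures::channel::mpsc;
use futures::{FutureExt, StreamExt};

enum DriverError {
    Shutdown,
}

// Hypothetical driver state; the real struct would also own the
// listener and the per-connection futures.
struct ConnDriver {
    stop_receiver: mpsc::Receiver<()>,
}

impl Future for ConnDriver {
    type Output = Result<(), DriverError>;

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        // Every field here is Unpin, so get_mut is fine.
        let this = self.get_mut();

        // Shutdown takes priority: if the stop channel has fired,
        // resolve immediately instead of select-ing over sources.
        if let Poll::Ready(Some(())) = this.stop_receiver.next().poll_unpin(cx) {
            return Poll::Ready(Err(DriverError::Shutdown));
        }

        // ... then poll the listener and the open connections ...
        Poll::Pending
    }
}
```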

Contributor:

It would be good to document how the shutdown process works and what connected clients should expect to happen to their pending requests.

Contributor:

So I'll try again: is this where we should send a CLOSE frame?

maciejhirsz (author), Jun 30, 2021:

Yeah, I reckon @niklasad1's planned refactor will make the code much easier to follow here too.

Two more review threads on ws-server/src/server.rs (outdated; resolved).

```rust
// Once the configured limit is reached, skip this socket and go
// back to accepting; the rejected client is simply ignored.
if driver.connection_count() >= self.cfg.max_connections as usize {
    log::warn!("Too many connections. Try again in a while.");
    continue;
}
```
Contributor:

This is where we'd send a CLOSE with a reason, right?

niklasad1 (Member), Jun 30, 2021:

No, we just ignore the socket.

The close reason is only sent after the WebSocket handshake has been completed, AFAIU.

maciejhirsz (author):

I think you're confusing this PR with telemetry :). At this stage the WS connection hasn't been established yet, so you can't send a frame. A nice thing to do would be to send an HTTP response with an appropriate status code.

Contributor:

> A nice thing to do would be to send an HTTP response with an appropriate status code.

Indeed, I'm confused. And a strong yes for returning a proper HTTP status code.

Contributor:

Is the proper status code 429 Too Many Requests? Should we include a `Retry-After` header too?

maciejhirsz (author):

Let's do that in a separate PR, since that would need extra changes to soketto too.
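
For illustration only, since this is deferred: a minimal sketch of such a rejection, writing a plain HTTP response before any WebSocket upgrade happens (the `Retry-After` value and the raw-socket approach are assumptions; the real change would go through soketto's handshake).

```rust
use tokio::io::AsyncWriteExt;
use tokio::net::TcpStream;

// Before the handshake completes the socket is still plain HTTP,
// so a hand-written 429 response is enough to turn the client away.
async fn reject_too_many(mut socket: TcpStream) -> std::io::Result<()> {
    socket
        .write_all(
            b"HTTP/1.1 429 Too Many Requests\r\n\
              Retry-After: 5\r\n\
              Content-Length: 0\r\n\r\n",
        )
        .await?;
    socket.shutdown().await
}
```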

maciejhirsz merged commit 7496afe into master on Jun 30, 2021.
maciejhirsz deleted the mh-async-driver branch on Jun 30, 2021 at 15:48.
dvdplm added a commit that referenced this pull request on Jul 6, 2021:
* master: (21 commits)
  New proc macro (#387)
  Streaming RpcParams parsing (#401)
  Set allowed Host header values (#399)
  Synchronization-less async connections in ws-server (#388)
  [ws server]: terminate already established connection(s) when the server is stopped (#396)
  feat: customizable JSON-RPC error codes via new enum variant on `CallErrror` (#394)
  [ci]: test each individual crate's manifest (#392)
  Add a way to stop servers (#386)
  [jsonrpsee types]: unify a couple of types + more tests (#389)
  Update roadmap link in readme (#390)
  Cross-origin protection (#375)
  Method aliases + RpcModule: Clone (#383)
  Use criterion's async bencher (#385)
  Async/subscription benches (#372)
  send text (#374)
  Fix link to ws server in README.md (#373)
  Concat -> simple push (#370)
  Add missing `rt` feature (#369)
  Release prep for v0.2 (#368)
  chore(scripts): publish script (#354)
  ...