Synchronization-less async connections in ws-server #388
Conversation
neat, LGTM
Looks good to me! :)
I think this is slightly harder to read than what we had (but it's likely just me struggling with reading async code).
Do you think you could elaborate a bit more on the tradeoff here? Single-threaded RPC parsing sounds like it should be fine, but maybe we need to be careful about the amount of work we do in the parsing stage?
Finally: I imagine we need the same treatment for the http server?
```rust
if let Poll::Ready(Some(())) = this.stop_receiver.next().poll_unpin(cx) {
    return Poll::Ready(Err(DriverError::Shutdown));
}
```
This bit replaces the select that was added on master.
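For context, a minimal sketch of how such a check can sit inside a hand-written `Future::poll` without a `select!` (the `Driver`/`DriverError` names and the channel type are assumptions for illustration, not the actual jsonrpsee code):

```rust
use std::pin::Pin;
use std::task::{Context, Poll};

use futures::channel::mpsc;
use futures::{Future, FutureExt, StreamExt};

enum DriverError {
    Shutdown,
}

struct Driver {
    stop_receiver: mpsc::Receiver<()>,
    // ... accepted connection futures would live here ...
}

impl Future for Driver {
    type Output = Result<(), DriverError>;

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        // `mpsc::Receiver` is `Unpin`, so the struct as written is too.
        let this = self.get_mut();

        // Check the stop channel first on every wake-up, so shutdown takes
        // priority over any connection work polled below.
        if let Poll::Ready(Some(())) = this.stop_receiver.next().poll_unpin(cx) {
            return Poll::Ready(Err(DriverError::Shutdown));
        }

        // ... poll the accepted connections here ...
        Poll::Pending
    }
}
```

Because the check runs on every poll, the stop signal is observed no matter which underlying future woke the task, which is what makes an explicit `select!` unnecessary.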
It would be good to document how the shutdown process works and what connected clients should expect to happen to their pending requests.
So I'll try again: is this where we should send a `CLOSE` frame?
Yeah, I reckon @niklasad1's planned refactor will make the code much easier to follow here too.
```rust
if driver.connection_count() >= self.cfg.max_connections as usize {
    log::warn!("Too many connections. Try again in a while.");
    continue;
}
```
This is where we'd send a `CLOSE` with a reason, right?
No, we just ignore the socket. The close reason is only sent after the websocket handshake has been completed, AFAIU.
I think you're confusing this PR with telemetry :). At this stage the WS connection hasn't been established yet, so you can't send a frame. A nice thing to do would be to send an HTTP response with an appropriate status code.
> A nice thing to do would be to send an HTTP response with an appropriate status code.

Indeed, I'm confused. And a strong yes for returning a proper HTTP status code.
Proper status code is `429 Too Many Requests`? Should we include a `Retry-After` header too?
Let's do that in a separate PR, since that would need extra changes to soketto too.
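For illustration only, a rough sketch of what such a rejection could look like if the raw HTTP response were written straight to the socket before dropping it (this is an assumption about one possible approach, not what jsonrpsee or soketto currently do; the real change would presumably go through soketto's handshake types):

```rust
use tokio::io::AsyncWriteExt;
use tokio::net::TcpStream;

// Hypothetical helper: reject a pre-handshake connection with 429 and a
// Retry-After hint, then close the socket.
async fn reject_too_many_connections(mut socket: TcpStream) -> std::io::Result<()> {
    let response = b"HTTP/1.1 429 Too Many Requests\r\n\
                     Retry-After: 5\r\n\
                     Content-Length: 0\r\n\
                     Connection: close\r\n\
                     \r\n";
    socket.write_all(response).await?;
    socket.shutdown().await
}
```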
* master: (21 commits)
  * New proc macro (#387)
  * Streaming RpcParams parsing (#401)
  * Set allowed Host header values (#399)
  * Synchronization-less async connections in ws-server (#388)
  * [ws server]: terminate already established connection(s) when the server is stopped (#396)
  * feat: customizable JSON-RPC error codes via new enum variant on `CallErrror` (#394)
  * [ci]: test each individual crate's manifest (#392)
  * Add a way to stop servers (#386)
  * [jsonrpsee types]: unify a couple of types + more tests (#389)
  * Update roadmap link in readme (#390)
  * Cross-origin protection (#375)
  * Method aliases + RpcModule: Clone (#383)
  * Use criterion's async bencher (#385)
  * Async/subscription benches (#372)
  * send text (#374)
  * Fix link to ws server in README.md (#373)
  * Concat -> simple push (#370)
  * Add missing `rt` feature (#369)
  * Release prep for v0.2 (#368)
  * chore(scripts): publish script (#354)
  * ...
An experiment of sorts that will run all incoming ws connections on a single task/thread. This has multiple advantages in that we don't incur synchronization costs or atomic ref counting, so things like the `Methods` or `Settings` don't have to be wrapped in `Arc`s (methods can be shared between connections) or use `Arc`s internally (lists of allowed origins / hosts).

If we really need to use all cores for handling RPC parsing (as opposed to spawning individual method calls on their own tokio tasks), the `ws-server` could start a thread pool and poll for incoming connections on each thread (`TcpListener::poll_accept` only needs `&self`, so we can cheaply share `Arc<TcpListener>` between threads), although that would incur some costs for keeping track of the number of open connections.
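A rough sketch of that thread-pool idea, assuming tokio's `TcpListener` and using spawned worker tasks rather than dedicated OS threads (the function and variable names are illustrative, not the actual implementation):

```rust
use std::sync::Arc;
use tokio::net::TcpListener;

// Each worker accepts from the same listener; sharing is cheap because
// `TcpListener::accept` (and `poll_accept`) only needs `&self`.
async fn accept_on_workers(listener: Arc<TcpListener>, workers: usize) {
    let mut handles = Vec::with_capacity(workers);
    for _ in 0..workers {
        let listener = Arc::clone(&listener);
        handles.push(tokio::spawn(async move {
            loop {
                match listener.accept().await {
                    // The worker owns the connections it accepts, so its
                    // per-worker state (methods, allowed origins, ...) does
                    // not need to live behind an `Arc` or a lock.
                    Ok((_socket, _peer)) => { /* drive the connection locally */ }
                    Err(err) => log::warn!("accept failed: {}", err),
                }
            }
        }));
    }
    for handle in handles {
        let _ = handle.await;
    }
}
```

The remaining cost mentioned above would show up here: the workers would need some shared counter to enforce `max_connections` across all of them.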