
expose eventloop to allow timeouts to be set #14

Closed
alkis opened this issue Dec 14, 2015 · 14 comments

@alkis
Contributor

alkis commented Dec 14, 2015

This will make it possible to run a single client handler that sends periodic pings to a server.

@housleyjk
Owner

I won't expose the event loop, but timeouts have been a goal since I started this project. This feature will be implemented. However, I am still grappling with the best way to provide timeouts in the API without forcing users who don't care about them to provide a timeout message. I am wondering if this needs to wait until we have associated type defaults: rust-lang/rust#29661.
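The tension above can be sketched in std-only Rust. This is not the ws-rs API; the `Handler` trait and method names here are illustrative. Associated type defaults (rust-lang/rust#29661) are unstable, but a generic type parameter with a default already works on stable and shows the shape of the problem: users who ignore timeouts should not have to name a timeout message type.

```rust
// Sketch only: `Handler`, `Echo`, and `Pinger` are hypothetical names,
// not part of ws-rs. The default parameter `Timeout = ()` lets handlers
// that don't care about timeouts skip the annotation entirely.
trait Handler<Timeout = ()> {
    // No-op default, so most handlers write nothing timeout-related.
    fn on_timeout(&mut self, _event: Timeout) {}
}

// A handler that ignores timeouts needs no extra type annotations.
struct Echo;
impl Handler for Echo {}

// A handler that uses timeouts picks its own message type.
struct Pinger {
    fired: Vec<u64>,
}
impl Handler<u64> for Pinger {
    fn on_timeout(&mut self, event: u64) {
        self.fired.push(event);
    }
}

fn main() {
    let mut p = Pinger { fired: Vec::new() };
    p.on_timeout(42);
    assert_eq!(p.fired, vec![42]);
    println!("pinger saw {:?}", p.fired);
}
```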

If you have suggestions, or want to implement this yourself, please go right ahead.

In the meantime, you can do what you want by adding another component to the system using a separate thread. The pong example shows how to do this.

@housleyjk housleyjk self-assigned this Dec 15, 2015
@alkis
Contributor Author

alkis commented Dec 15, 2015

Why are you against exposing the event loop?

@housleyjk
Owner

TL;DR: I'm following the design implications of the Mio library and the current Rust ecosystem.

There are basically three interrelated reasons why I won't expose the EventLoop to code clients.

  1. The design intent of Mio, the library that provides cross-platform asynchronous io for WS-RS, is that the event loop should be encapsulated in a "component" and not shared between various parts of the system. This is a video from the author. This sort of encapsulation is exactly what WS-RS does. It provides a WebSocket component that requires one thread to run, and it does that one thing, WebSockets, rather than requiring your whole system to depend on asynchronous io.

  2. If you check out the docs for Mio, you will see that the EventLoop is generic over its handler: http://rustdoc.s3-website-us-east-1.amazonaws.com/mio/v0.5.x/mio/struct.EventLoop.html. This means that each event loop does one thing, namely, what its handler does. This is by design, as mentioned above. In the case of WS-RS, the handler manages WebSocket connections. If I were to expose it to code clients, it wouldn't benefit them much because they couldn't make it do anything other than what it already does. They couldn't even use the timeout functionality, because timeouts are defined as an associated type of that handler: http://rustdoc.s3-website-us-east-1.amazonaws.com/mio/v0.5.x/mio/trait.Handler.html. In other words, the event loop isn't a generic event loop that you can do io with; it's an EventLoop that just does WebSockets and not application code. The design of Mio and the memory model of Rust are such that the event loop is protected from modifications. This is nice because it allows me to provide strong conformance and performance guarantees: basically, if you don't change any frames using on_frame or on_send_frame, you won't get protocol errors from other endpoints.

  3. I generally agree with Carl's approach. Rust abandoned libgreen-style tasks in favor of threads in order to promote compatibility and optimization. So the current Rust ecosystem is threaded, and it probably always will be, which means it makes sense to use what Rust offers rather than try to force everything into an asynchronous model. I want all of the WebSocket connections on one thread because such connections are long-lived and strongly benefit from a single-threaded paradigm, but it's perfectly reasonable (and possibly more performance-friendly) to use a threadpool for short-lived tasks. If we are going to make "everything" asynchronous, then we need to pay the cost and build a truly all-purpose event loop on top of Mio like these libraries do: https://github.com/tailhook/rotor, https://github.com/zonyitoo/coio-rs. I just want to provide WebSockets and not force users into an asynchronous paradigm. Ultimately, if you want async io for something like a db connection, someone will need to implement that, whereas if you use a threadpool, you can get going today. And once you are using threads somewhere, why should asynchronous io infect the project as a whole? The approach in this library is to provide asynchronous WebSocket connections rather than an asynchronous framework. If someone implements WebSockets using one of those more all-purpose Mio solutions, I would love to see how the benchmarks turn out. I am a little skeptical of the benchmarks in https://github.com/zonyitoo/coio-rs because they show that Go is faster than Mio, but somehow coio, which uses Mio, is faster than Go? It seems like the implementations are not consistent. How is coio, which performs extra allocations, so much faster than the raw, non-allocating library on which it depends? I don't buy it. Anyway, I digress; hopefully, if you've got to this point, you understand my main point, which is that sharing the mio EventLoop is just not feasible.
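Point 2 above can be made concrete with a toy model. Assumption: this is a simplified mock of Mio 0.5's design, not the real mio API; the point is that because the event loop is generic over its handler, only the handler's own Timeout type can ever be scheduled on it, so exposing the loop buys application code nothing.

```rust
// Toy model: the event loop only accepts timeouts of its handler's
// associated type, so an exposed WS-RS loop could only schedule
// WS-RS's own timeout type, never application-defined ones.
trait Handler {
    type Timeout;
    fn timeout(&mut self, event: Self::Timeout);
}

struct EventLoop<H: Handler> {
    queue: Vec<H::Timeout>,
}

impl<H: Handler> EventLoop<H> {
    fn new() -> Self {
        EventLoop { queue: Vec::new() }
    }
    // Only `H::Timeout` values can be scheduled; code with a different
    // timeout type simply cannot use this loop.
    fn timeout_ms(&mut self, token: H::Timeout, _delay: u64) {
        self.queue.push(token);
    }
    fn run(&mut self, handler: &mut H) {
        for t in self.queue.drain(..) {
            handler.timeout(t);
        }
    }
}

struct WsHandler {
    fired: Vec<usize>,
}
impl Handler for WsHandler {
    type Timeout = usize; // the library picks this; clients can't change it
    fn timeout(&mut self, event: usize) {
        self.fired.push(event);
    }
}

fn main() {
    let mut evloop = EventLoop::<WsHandler>::new();
    let mut handler = WsHandler { fired: Vec::new() };
    evloop.timeout_ms(7, 1000);
    evloop.run(&mut handler);
    assert_eq!(handler.fired, vec![7]);
}
```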

However, I do want to allow people to use timeouts. Right now the timeout type is (), so theoretically, I should be able to make it generic such that implementors can use whatever timeouts they want on the event loop.

In the meantime, I think a single timer thread won't impose any discernible performance penalty on you. In fact, I modified the pong example so that it checks latency every 1 second (instead of every 5), and I left in the calls that print the latency (which means there is all that overhead related to printing to stdout), and at 10,000 connections sending 10 messages each, the pong example completed in only 3644ms (about twice as long as the raw benchmark, I know, but printing is expensive. FYI, I left the printing in because I didn't want Rust to optimize away the unused latency computation). Anyway, that's still much faster than the last time I benchmarked rust-websocket. So, basically, what I am saying is that it is highly unlikely that the timer thread is actually costing you any kind of performance penalty, and I am sure your system can spare one little thread like that. This is why I am doing things this way rather than forcing a total asynchronous paradigm on Rust where it's not natively supported (anymore).

@nicokoch

Nice insights!

I'm interested (and not that familiar with async io):

With ws-rs, if I get a request and the processing function is very complex, will this block all other requests? If so, should I spawn a thread from the on_open function? What are the alternatives?

@alkis
Contributor Author

alkis commented Dec 15, 2015

Thanks for the detailed answer. I really appreciate it. Since by design the EventLoop is not meant to be shared, that's fine. I didn't dig too much into the implementation details of Mio because your library is my first time using it.

I moved away from websocket-rs because the model there required that I use 3 threads per connection to do what I want, and teardown/shutdown became a headache. Not only that, but the library was failing with broken pipe after long sessions, and that's not good for my application. Adding more threads is sometimes a performance problem, but there is a cognitive problem as well, and in this case the latter is much larger than the former: you need a thread to run the timer, you must not forget to join on it or you will leak it, and you have shared state between the threads (the latency). If this could be accomplished in a single thread, the code would be much less complicated.

So back to the original point:

  • no to sharing the EventLoop
  • yes to implementing Timeout support

Is there a plan to add Timeout in the near future? If not maybe you can give some insight for someone who might want to implement this functionality?

@housleyjk
Owner

I understand, the cognitive burden of threads can be a pain. As far as timeout support, since you are interested, and you have the only active issue on github right now, I feel that I should move this to the top of my todo list. I think I have a plan for how to do timeouts without negatively impacting the current API or requiring associated type defaults. Give me until Friday, and hopefully I will have an implementation of this feature for you to try out.

@housleyjk housleyjk added this to the EOW milestone Dec 16, 2015
@alkis
Contributor Author

alkis commented Dec 16, 2015

That's great! Thanks for taking this on.

@housleyjk
Owner

The timeout feature is now in master. If you want, please try out the new pong example to see if the feature is what you were looking for.

@alkis
Contributor Author

alkis commented Dec 18, 2015

This is great, Jason. Thanks for taking a stab at it. What's unfortunate is that Sender::timeout does not return Result<mio::Timeout>. Is it possible to hold a reference to the EventLoop inside Sender and call timeout_ms directly on it to achieve this?

@housleyjk
Owner

Rust's borrow checker forbids holding a reference to the EventLoop in the Sender; such a reference would freeze the event loop. Basically, because of the asynchronous nature of the system, you have to accept the Timeout via a callback of some kind, which is what on_new_timeout does.
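The callback shape can be sketched in std-only Rust. Assumption: `TimeoutHandle`, `request_timeout`, and `tick` are illustrative stand-ins for mio::Timeout, Sender::timeout, and the event loop's internals, not the real APIs. The scheduling call is fire-and-forget; the handle is minted later by the loop itself and pushed to the handler.

```rust
// Why the Timeout handle arrives via a callback: the scheduling call
// can't return the loop's token synchronously without the caller
// holding a reference to the loop, so the loop calls back instead.
struct TimeoutHandle(usize);

trait Handler {
    // Delivered asynchronously, after the loop creates the handle.
    fn on_new_timeout(&mut self, handle: TimeoutHandle);
}

struct EventLoop {
    next_id: usize,
    requests: Vec<u64>, // pending delays
}

impl EventLoop {
    fn new() -> Self {
        EventLoop { next_id: 0, requests: Vec::new() }
    }
    // Fire-and-forget, like a call that returns no handle.
    fn request_timeout(&mut self, delay_ms: u64) {
        self.requests.push(delay_ms);
    }
    // Later, the loop mints handles and calls the handler back.
    fn tick<H: Handler>(&mut self, handler: &mut H) {
        for _delay in self.requests.drain(..) {
            let handle = TimeoutHandle(self.next_id);
            self.next_id += 1;
            handler.on_new_timeout(handle);
        }
    }
}

struct MyHandler {
    handles: Vec<usize>,
}
impl Handler for MyHandler {
    fn on_new_timeout(&mut self, handle: TimeoutHandle) {
        self.handles.push(handle.0); // keep it, e.g. to cancel later
    }
}

fn main() {
    let mut evloop = EventLoop::new();
    let mut handler = MyHandler { handles: Vec::new() };
    evloop.request_timeout(1000);
    evloop.request_timeout(5000);
    evloop.tick(&mut handler);
    assert_eq!(handler.handles, vec![0, 1]);
}
```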

@alkis
Contributor Author

alkis commented Dec 18, 2015

I can think of a way to make this synchronous, but it involves quite a bit of state in the library, and I don't like the tradeoffs :-). The name bothers me a bit; it's too similar to on_timeout. How about on_timeout_scheduled?

@housleyjk
Owner

I considered on_timeout_scheduled when I was designing this, but I went with on_new_timeout because it was more analogous to the existing method on_send_frame.

@housleyjk
Owner

I am going to publish the API as is, but I am certainly open to changing the names if it bothers more people.

@alkis
Contributor Author

alkis commented Dec 21, 2015

Thanks, Jason. I am using it already.
