Bufs #3
Conversation
The other nice bit I like about this is how it makes registration quite clear and concise: https://github.com/rrichardson/mio/blob/bufs/src/reactor.rs#L108. I am a fan of registering one callback per event type, but that would require the reactor to manage its own Hash of token -> event -> callback, which is a fair amount of overhead if the user doesn't need it. They can always build their own on top of the single handler closure.
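For illustration, here is a minimal sketch of that kind of user-side dispatch table built on top of a single handler closure. The `Dispatcher`, `Token`, and `EventKind` names are invented for the example and are not mio's API.

```rust
use std::collections::HashMap;

// Hypothetical stand-ins for mio's token and event kinds; not the real API.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct Token(usize);

#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
enum EventKind { Readable, Writable, Error }

// A user-maintained dispatch table: (token, event kind) -> callback.
struct Dispatcher {
    callbacks: HashMap<(Token, EventKind), Box<dyn FnMut()>>,
}

impl Dispatcher {
    fn new() -> Dispatcher {
        Dispatcher { callbacks: HashMap::new() }
    }

    fn on(&mut self, token: Token, kind: EventKind, cb: impl FnMut() + 'static) {
        self.callbacks.insert((token, kind), Box::new(cb));
    }

    // This single method is what would be handed to the reactor as its one
    // handler closure; it fans each event out to the matching callback.
    fn dispatch(&mut self, token: Token, kind: EventKind) {
        if let Some(cb) = self.callbacks.get_mut(&(token, kind)) {
            cb();
        }
    }
}

fn main() {
    let mut d = Dispatcher::new();
    d.on(Token(1), EventKind::Readable, || println!("token 1 is readable"));
    // In real use the reactor would call this from inside its event loop.
    d.dispatch(Token(1), EventKind::Readable);
}
```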
Hey, I skimmed quickly from my phone. I will give it a closer look tonight. But first, so that I can understand more what you are thinking, I have a couple of questions. First, what is your strategy for setting timeouts? Second, what is your plan for cross-thread interaction (waking up the reactor and sending / receiving messages)?

Timeouts are just passed all the way from the reactor through the selector. If we want to standardize the high level interface, I suggest we use […]. I'm not completely clear on your second question; I'll assume it's about a […]. Timeouts for this arbitrary event reactor would have to be scheduled in yet […]. I have been working on a trait called Laudable which any object can […]. I would recommend that we establish a set of conventions around event names […]. So if a 2ndary thread wanted to know if a stream received some data, it […] my stream.on("data", |dta : &[u8]| {...}). I am on my phone as well. Maybe we should schedule a time to chat on Gitter.
@carllerche Why did you close this? Was there an out-of-band chat with @rrichardson?

@carllerche tells me it was an accident 😄

My understanding of @carllerche's strategy is to avoid using another timeout thread by just baking timeouts into the reactor. Since it has to spin for some amount of time anyway, you may as well make it handle timeouts.
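As a rough illustration of folding timers into the poll loop (a sketch of the idea, not mio code): the reactor computes how long it may block from the earliest pending deadline, so timers and IO get handled on the same thread.

```rust
use std::time::{Duration, Instant};

// Sketch only: compute how long the selector may block, based on the
// earliest pending timer deadline. `deadlines` stands in for whatever
// timer structure (wheel, heap, ...) the reactor actually keeps.
fn next_poll_timeout(deadlines: &[Instant], now: Instant) -> Option<Duration> {
    deadlines
        .iter()
        .map(|&d| d.saturating_duration_since(now))
        .min()
}

fn main() {
    let now = Instant::now();
    let deadlines = [now + Duration::from_millis(250), now + Duration::from_millis(80)];
    // The poller would be asked to wait at most ~80ms, so expired timers can
    // be fired on the same thread as IO readiness events.
    println!("poll timeout: {:?}", next_poll_timeout(&deadlines, Instant::now()));
}
```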
My understanding is that @carllerche wants it to be possible to post an event to the IO reactor, like IOCP's PostQueuedCompletionStatus, or similar to eventfd, but without extra system calls or locks. If I'm not misunderstanding @carllerche's plan, it's to get timers and events out of syscall territory and into userland, where they can be implemented far more efficiently, but still handleable on a single reactor. But he should correct me if I'm wrong.
Sorry, I meant having a reactor dedicated to scheduling (and nothing else). For the IO/selector style reactors, is not the timeout supported by the […]? The opposite case poses a problem: what resolution is good enough for […]? The poller would have to be set to timeout at 1ms, which means it will be a […].

Yes, putting a second thread into the reactor which just does calculated […]. Perhaps we should delegate the combinations of events, or reactors emitting […]. It can be found here: […].
I believe this is @carllerche's strategy. I've written code like this before (very inefficiently, and very naively), and I think a well-implemented version of this is a better option than trying to add the overhead of an additional thread and the attendant syscalls.

Awesome. I'm fairly sure libuv uses this model as well. On linux this […].
@rrichardson kqueue has […].

@rrichardson I spoke to @carllerche. The plan is indeed to use those kernel APIs, but not before first doing a userland check to avoid unnecessary trips to the kernel for the (common) case where things are already ready.
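A minimal sketch of that "check userland first" pattern, with all names invented for the example (this is not mio's notification API, and it is kept single-threaded for brevity): messages posted to the reactor land in an in-process queue that gets drained before the reactor ever considers blocking in the kernel.

```rust
use std::collections::VecDeque;

// Messages posted to the reactor land in an in-process queue; the reactor
// only falls back to a kernel wakeup primitive (eventfd, pipe, kqueue user
// event) when it actually needs to block.
struct Notifier {
    queue: VecDeque<String>,
}

impl Notifier {
    fn new() -> Notifier {
        Notifier { queue: VecDeque::new() }
    }

    fn post(&mut self, msg: String) {
        self.queue.push_back(msg);
        // A real implementation would only poke the eventfd/pipe if the
        // reactor might currently be blocked in the selector.
    }

    fn poll_userland(&mut self) -> Option<String> {
        // Cheap in-process check that avoids a syscall in the common case
        // where messages are already waiting.
        self.queue.pop_front()
    }
}

fn main() {
    let mut n = Notifier::new();
    n.post("wake up".to_string());
    while let Some(msg) = n.poll_userland() {
        println!("got: {}", msg);
    }
    // Only when poll_userland() returns None would the reactor block in the
    // selector and rely on the kernel wakeup path.
}
```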
Good to know. Let me know if you want/need my help on that bit. For now I […]. There will be two layers: the top level event layer, AwaitableEvent, will […]. At the lower level, the AwaitableEvent will support a mapping to the OS […]. I have started to write up some design ideas here: […].
Hey, it turns out that the PR was closed because it was based off of the […].

Gotcha. I can re-submit a new pull request, but the only part of my submission that I consider still valid is the handler refactoring. I am not happy with the events, and the run/io_wait is going to get a major overhaul for timeouts.
So, like was discussed above, my goal for Reactor is to be more than just pure IO. I plan on implementing a userland coarse timeout system (probably defaulting to ~100ms), signal handling, and an efficient way to send messages to the reactor from other threads. All of these features require essentially having control of the reactor.

A couple of goals that I have for MIO itself: it should be single threaded (though there may be extra threads used to backfill features like signal handling on older platforms), while allowing a higher level lib to build out a reactor cluster. This means that all of the features in MIO itself need to run on a single thread. Another goal I have is zero allocations at runtime: basically, pre-allocate memory before starting the reactor and then never allocate again. I'm still trying to figure out exactly what scope you are thinking of for MIO, to try to get on the same page.

Anyway, I have read through what you have done, and I think I understand what you are trying to do; however, I don't think that it lines up exactly with my long term plan for Reactor. I'm not saying that I have perfectly laid things out as I have them (definitely not, since I'm mostly trying to get to a feature complete stage and then focus on cleaning things up). I'm wondering if perhaps there is middle ground in cleaning up how IO polling happens: make a Poll struct that has a register fn, a poll(timeout) fn, and then a poll.events() iterator or something, which is probably close to what your goals are. Reactor could then use that, or a user of MIO could just drop down to the Poll abstraction and use that directly.

Besides that, there are a few things I feel strongly about; for example, I really don't like using fns as callbacks vs. traits (like I have now). State is needed and, if you look at master right now, I set it up so that Reactor::run returns the handler. This allows ownership to work out better.

What do you think of the general strategy of starting by creating a simple Poll struct that abstracts over IO polling and having Reactor use that?
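To make the shape of that proposal concrete, here is a rough skeleton of a Poll struct with register, poll(timeout), and an events() iterator. This is only a sketch of the idea being discussed, not mio's actual API; Token, Interest, and Event are placeholders, and the kernel calls are stubbed out.

```rust
use std::io;
use std::time::Duration;

// Placeholder types; the real abstraction would define these properly.
struct Token(usize);
struct Interest { readable: bool, writable: bool }
struct Event { token: Token, readable: bool, writable: bool }

struct Poll {
    // Pre-allocated before the reactor starts; reused on every poll call.
    events: Vec<Event>,
}

impl Poll {
    fn with_capacity(cap: usize) -> Poll {
        Poll { events: Vec::with_capacity(cap) }
    }

    // Associate an IO source (expressed here as a raw fd) with a token and
    // an interest set. A real implementation would call epoll_ctl / kevent.
    fn register(&mut self, _fd: i32, _token: Token, _interest: Interest) -> io::Result<()> {
        Ok(())
    }

    // Block until something is ready or the timeout elapses. A real
    // implementation would call epoll_wait / kevent and fill `self.events`.
    fn poll(&mut self, _timeout: Option<Duration>) -> io::Result<usize> {
        Ok(self.events.len())
    }

    // Iterate the readiness events gathered by the last call to `poll`.
    fn events(&self) -> impl Iterator<Item = &Event> {
        self.events.iter()
    }
}

fn main() -> io::Result<()> {
    let mut poll = Poll::with_capacity(1024);
    poll.register(0, Token(0), Interest { readable: true, writable: false })?;
    let ready = poll.poll(Some(Duration::from_millis(100)))?;
    for _event in poll.events() {
        // A Reactor (or an end user) would dispatch on each event here.
    }
    println!("{} events ready", ready);
    Ok(())
}
```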
I would also point out that you can obviously implement a trait for different kinds of functions, which makes traits strictly more flexible. For example, we implement Conduit's Handler for the relevant functions in Conduit by default.
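A small sketch of that technique, with invented names (this is not Conduit's or mio's actual trait): a blanket impl makes any suitable closure or fn satisfy a handler trait.

```rust
// Invented names for illustration; not Conduit's or mio's actual traits.
trait ReadHandler {
    fn readable(&mut self, token: usize);
}

// Blanket impl: any closure or fn taking a usize is automatically a handler,
// so trait-based handlers subsume the plain-function style.
impl<F> ReadHandler for F
where
    F: FnMut(usize),
{
    fn readable(&mut self, token: usize) {
        (*self)(token)
    }
}

fn run_handler<H: ReadHandler>(handler: &mut H) {
    handler.readable(42);
}

fn main() {
    // A plain closure satisfies the trait thanks to the blanket impl.
    let mut h = |token: usize| println!("readable on token {}", token);
    run_handler(&mut h);
}
```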
A Poll struct with a basic single-method interface for all events is fine. My main point in getting rid of the handler was that there is no way to know what events the higher level interfaces might be interested in, and two seems rather arbitrary. For epoll, for instance, it should have at least four: read/write/hangup/error. For other kernels it might be different. TBQH I have never bothered to look at kqueue beyond when it first appeared in FBSD. And then there is that other operating system. Then there are polling systems that we haven't even considered. Going off the deep end in terms of composability, it's not unfair to assume that someone could come up with an IRC Selector, or a MUD Selector. Who knows.

My vision for event management is a bit different than yours, which is fine. I see now that your polling system is as ambitious as something like libuv. That is great; it is clearly a successful model. I am a fan of smaller, more componentized IO managers that can be composed together. My plan was to have a very high level event management interface (Awaitable) which could arbitrarily compose many very low level IO interfaces (e.g. my unfinished async lib) along with a separate scheduler, or other mechanisms. I could see a complete end-user application leveraging 3 or 4 differently purposed reactor threads.

Awaitable is intended to be completely agnostic of any Reactor or other callback based system. It will register its own callbacks against arbitrary events. IO systems would use it to emit/translate events. This shields it from reliance on any lower level event registration system.

At this point I am happy to either help you wrap up the functionality in mio as you deem fit (i.e. how about some feature requests :) ), or I can work on Awaitable, which would certainly be easier from a synchrony point of view. Once I get further down the development path, I am sure I will come to you with either feature requests or pull requests for new features.
@wycats, I seriously didn't know that that was even possible. Wow. That is rather cool :)

The goal here is to provide a minimal, low-level interface to the underlying kernel functionality. In other words, we don't want users to have to worry about the precise low-level interface of things like epoll, but we also don't want to get so high-level that we lose the ability to get optimal performance where possible. In the case of IO, that means readability, writability and errors. I believe @carllerche plans to add support for errors via […].

As part of prepping for the mio project, I investigated […].
It's extremely important to us that this interface support extremely low-level control (and the attendant optimizations), while still being portable. In other words, we want this to be high-level enough to abstract the various efficient polling interfaces but low-level enough to maintain low-level optimizations. The next level of abstraction, I think, is what you're thinking about: it would use the task queue we were discussing above to support any kind of notifications (IRC or whatever). In JavaScript, all user-accessible code goes through the medium-level task queue, but this abstraction provides direct access to IO; that's why it looks a bit more involved. In JS, when you need higher-performance optimizations, you need to ask the browser (or Node) for a high-performance API. In mio, you can build it directly. That's actually kind of cool 😄
I think this will end up being performant enough for many cases, and is reminiscent of the kind of control you have in JavaScript (both Node and the browser). But it still means that you can't use the most performant kernel APIs to build your abstractions. From a 20,000 foot view, my view of things is something like this:
It may not work out, and all of the work may end up producing something that is no more efficient, in general, than the JS model for user-accessible code, but the whole point of Rust is that it's different, and I'd like to try!

One final point regarding JavaScript: in browser JS, handlers basically have two choices: (a) do very little work, or (b) punt the work to Workers. Working with Workers is kind of nice, because it's a fully message-passing system (with no user-accessible shared mutable memory), but that imposes a pretty serious limitation: you are forced to either serialize the message you are passing (slow unless it's quite small) or represent it as a Transferrable (at the moment, limited to simple byte arrays). I suspect that a model that made heavier use of Workers for CPU-intensive work would be far more popular if it was more ergonomic. In Rust, the equivalent of Transferrable is the Send trait.

As a result, I think it makes sense to augment the JS programming model, where most work starts off and gets done on the reactor thread, to a programming model where work starts off on the reactor thread but CPU intensive work is often migrated to a "Worker" (a share-little-or-nothing actor that gets its messages via Send).

FWIW: I don't think @carllerche shares my enthusiasm for an augmented JS model in Rust, but a well-architected mio could support many different models. Indeed, that's the point.
That's fine. Please include EPOLLHUP and EPOLLRDHUP and the other kernel […].
In https://github.com/rrichardson/mio/tree/refactor/src you will find my proposed refactoring of the Selector and Reactor interfaces along with ancillary supporting traits.
This set of changes was born out of a couple of API design philosophies. I have been coding C++ for 15 years, but quite a bit of Haskell and Clojure as of late, so I have a strong functional bent which certainly influences my design decisions.
But some of my rules are:
I'm not listing these because I think other people should follow them. I don't expect anyone to agree with any of these things, or that my implementation even accurately reflects these beliefs perfectly :)
So with that as the background, let me try to explain what I did:
The first thing you might notice is that I reduced the size of the Reactor implementation by about 50%. Not only because I removed connect and listen, but because I changed the logic by which the reactor loops: it now loops conditionally on whether the handler says to continue (a boolean return value). I think it puts more power in the hands of the reactor user without making the interface more complex.
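Here is a stripped-down sketch of that looping rule (the selector and real events are faked, and none of these names are from the actual code): the reactor keeps going only while the callback returns true, so run_once falls out of returning false.

```rust
// Sketch: the reactor keeps polling only while the user callback returns
// `true`. The selector and real events are elided; `u64` stands in for an
// event payload.
fn run<F>(mut handler: F)
where
    F: FnMut(u64) -> bool,
{
    let mut tick = 0u64;
    loop {
        // A real reactor would block in the selector here and hand each
        // readiness event to the handler.
        tick += 1;
        if !handler(tick) {
            break; // the handler asked the reactor to stop looping
        }
    }
}

fn main() {
    // "run_once" falls out for free: return false after the first event.
    run(|_event| false);

    // Keep running until the third event, then stop.
    run(|event| event < 3);
}
```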
In addition, I brought the ability to decide which events to subscribe to, as well as timeouts, up to the top level of the Reactor interface. This puts even more power in the hands of the user.
I removed a couple circular dependencies by moving the IoEvent trait and Token out into their own modules.
The high level goal was to enable the user to define the Events in which they were interested. So the Reactor interface now supports the registration of any event that is part of the IoEventKind type, which has been pulled out into its own module, events.rs
Note that this is now completely decoupled from the notion of a Token. I don't know if this was a design choice or just a technical artifact of working with epoll. Either way, I don't think that epoll's specific implementation should be influencing the design of the high level interface.
I removed connect and listen from Reactor. IMO they have no business there because Reactor is for Files, Pipes and many future things that have no notion of connect or listen. Those should absolutely be part of the customer's domain.
I removed the handler trait entirely. All that's needed here is a callback. If a user wants to manage their IO with a struct, let them, but that should be no business of ours. The callback function leverages this new notion of decoupling the event subscription/alert (as much as possible) from the reactor interface.
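As an illustration of that callback-based registration (a sketch only; the Token, IoEventKind, and register names here are stand-ins, not the actual interface), the caller chooses the event kinds it cares about and supplies a single closure:

```rust
// Placeholder names; not the actual interface.
#[derive(Clone, Copy, Debug)]
struct Token(usize);

#[derive(Clone, Copy, Debug)]
enum IoEventKind { Readable, Writable, Hangup, Error }

// Boxed so registrations with different closure types can sit side by side;
// a reactor generic over a single handler closure would also work.
type Callback = Box<dyn FnMut(Token, IoEventKind)>;

struct Registration {
    token: Token,
    interests: Vec<IoEventKind>,
    callback: Callback,
}

// The caller decides exactly which event kinds to subscribe to and supplies
// one plain closure; no Handler trait is involved.
fn register(
    registrations: &mut Vec<Registration>,
    token: Token,
    interests: Vec<IoEventKind>,
    callback: impl FnMut(Token, IoEventKind) + 'static,
) {
    registrations.push(Registration { token, interests, callback: Box::new(callback) });
}

fn main() {
    let mut regs = Vec::new();
    register(
        &mut regs,
        Token(0),
        vec![IoEventKind::Readable, IoEventKind::Hangup],
        |token, kind| println!("{:?} -> {:?}", token, kind),
    );

    // Simulate the reactor delivering each subscribed event once.
    for reg in regs.iter_mut() {
        for kind in reg.interests.clone() {
            (reg.callback)(reg.token, kind);
        }
    }
}
```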
The os/epoll implementation has been modified to reflect the new IoEvent trait. This tacks the IoEvent (generic) functionality directly into the low-level nix::sys::epoll EpollEventKind. No need for an intermediate structure here.
I moved the EpollEvent array that is passed to select() to be a member of the Selector struct.
I am not particularly happy with this. The reason I did it was to hide the guts of the array (which are EpollEvent, and the user should know nothing about that). This breaks the model slightly, because if someone wanted to run reactor.run simultaneously in multiple threads, which is possible but not common, this would break. If we wanted to let the reactor create it while hiding the EpollEvent guts (or doing nasty casts), we would have to do it in a Box. That would not be a big deal, IMO, as the alloc is still out of the hot path of event processing.
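A sketch of the trade-off being described, with RawKernelEvent standing in for the OS-specific epoll_event/kevent type (none of this is the actual implementation): the selector pre-allocates and owns the buffer, which keeps allocation out of the hot path but ties each select() call to a single &mut Selector at a time.

```rust
// `RawKernelEvent` stands in for the OS-specific type (epoll_event, kevent)
// that the user should never see.
#[derive(Clone, Copy, Default)]
struct RawKernelEvent {
    token: u64,
    flags: u32,
}

struct Selector {
    // Allocated once up front and reused on every call to `select`, keeping
    // allocation out of the event-processing hot path. Because the buffer is
    // borrowed mutably during select(), two threads cannot drive the same
    // Selector concurrently.
    buf: Vec<RawKernelEvent>,
}

impl Selector {
    fn with_capacity(cap: usize) -> Selector {
        Selector { buf: vec![RawKernelEvent::default(); cap] }
    }

    fn select(&mut self) -> &[RawKernelEvent] {
        // A real implementation would hand `self.buf` to epoll_wait / kevent
        // and return only the filled prefix.
        &self.buf[..0]
    }
}

fn main() {
    let mut selector = Selector::with_capacity(1024);
    let ready = selector.select();
    println!("{} kernel events ready", ready.len());
}
```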
I modified Yehuda's reactor unit test to reflect the way I envision people using this interface. As you'll now notice, run_once is simply accomplished by returning false from the handler callback.
I rather like it, but let me know your thoughts. Thanks.