reactor callback prototypes need redesign #124
Comments
A couple more issues:
A few ideas from discussion with @grondo today:
Ideally we would allow users of flux api to BYOEL (Bring Your Own Event Library). I think exposing specialized flux watchers for libev is the right first step, and in the future someone could write bindings for whatever event loop they are using (e.g. libuv) so that flux could be more nicely integrated into their application (i.e. tools would not necessarily need to create a thread to handle flux messages separate from their own main loop). This may also reduce code in the flux api implementations since we could defer "generic" handlers (fd, signal, etc.) to the loop implementation.

Therefore, I don't think this approach would lock us into libev forever, or at least not users of flux api, as long as we think we can embed the necessary flux core functionality in "watchers" generic enough to be implemented in any/most modern event loop interfaces. I haven't thought about your idea in detail, but I think this work could be nicely staged into the following work items: …
This probably grossly oversimplifies things, but it is my general opinion of a good direction to move in.
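As a sketch of what a "bring your own event loop" binding for libev might look like (flux_handle_fd() and flux_handle_dispatch() are hypothetical accessors, not the current API), a thin watcher type could wrap an ordinary ev_io on a flux-provided descriptor:

```c
/* Hypothetical sketch: flux_handle_fd() and flux_handle_dispatch() are
 * invented accessors, not the current flux API. */
#include <ev.h>

typedef struct flux_struct *flux_t;            /* opaque handle (assumed)    */
extern int  flux_handle_fd(flux_t h);          /* fd that signals activity   */
extern void flux_handle_dispatch(flux_t h);    /* run pending flux callbacks */

struct flux_watcher {
    ev_io io;           /* plain libev I/O watcher wrapped by the binding */
    flux_t h;
};

static void flux_watcher_cb(struct ev_loop *loop, ev_io *io, int revents)
{
    (void)loop; (void)revents;
    struct flux_watcher *w = (struct flux_watcher *)io;  /* io is first member */
    flux_handle_dispatch(w->h);
}

/* Register flux alongside the application's other libev watchers. */
void flux_watcher_start(struct ev_loop *loop, struct flux_watcher *w, flux_t h)
{
    w->h = h;
    ev_io_init(&w->io, flux_watcher_cb, flux_handle_fd(h), EV_READ);
    ev_io_start(loop, &w->io);
}
```

Bindings for another loop (libuv, etc.) would follow the same shape, which is why this need not tie flux api users to libev.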
Two thoughts:
Are those two points compelling enough to warrant keeping a streamlined version of the built-in reactor while still providing the specialized watchers for integration with an external libev loop?
I was thinking mainly of flux api users in my comments above. Comms modules themselves seem specialized enough that they would always use the "event loop implied" style you have now. I can't see any reason why a comms module would need to use a different event loop implementation from cmbd itself.

Sleepable RPCs in an event driven framework is a generic problem, I guess. I don't have a good answer for that one, except that the common way to do it (though I'm far from an expert) seems to be to have async callables that invoke a callback when the blocking function returns (e.g. see async dns lookup implementations; a rough sketch follows this comment). Coprocs seem like a neat solution but may not help in this case if native support from your event loop is required. (Also, what about any other blocking function an API user …?)

Sorry, I may have gotten this issue a bit off topic. However, I think we've resolved that …
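A minimal sketch of that continuation-passing style, with made-up names (kvs_get_async() and its signature are illustrative only, not part of the flux API): the blocking call is replaced by one that returns immediately and runs a callback when the reply arrives.

```c
#include <stddef.h>

/* Hypothetical names only -- not the real flux KVS API. */
typedef void (*kvs_get_cb)(const char *key, const char *val, void *arg);

/* Async variant of a blocking kvs_get(): returns at once, and the
 * continuation runs later from the reactor when the reply arrives.
 * (The stub below calls it inline just so the sketch is self-contained.) */
static int kvs_get_async(const char *key, kvs_get_cb cb, void *arg)
{
    cb(key, "42", arg);
    return 0;
}

static void have_value(const char *key, const char *val, void *arg)
{
    (void)key; (void)val; (void)arg;   /* continue the RPC's work here */
}

int main(void)
{
    return kvs_get_async("resource.hosts", have_value, NULL);
}
```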
Oh, it suddenly dawned on me that the handle implementations really just need to export a file descriptor that, when ready, indicates that there is flux message activity to process. For api users we could offer a choice of the internal reactor (the same one implied in comms modules), or the above.
How would you notify the pipefd/eventfd without having a callback wired into the event loop the application is using, or a thread? Or could you piggyback on something zeromq is already doing?
Maybe aggregate internal fds into one epoll fd?
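A minimal sketch of that aggregation, using only the Linux epoll interface (nothing here is flux code): the member descriptors are added to one epoll instance, and the resulting epoll fd is itself pollable, so it can be handed to an application's existing event loop as a single descriptor.

```c
#include <sys/epoll.h>
#include <unistd.h>

/* Combine several internal fds into one pollable epoll fd.  The returned
 * descriptor becomes readable whenever any member fd has pending input,
 * so an application can watch just this one fd in its own loop. */
int aggregate_fds(const int *fds, int nfds)
{
    int efd = epoll_create1(EPOLL_CLOEXEC);
    if (efd < 0)
        return -1;
    for (int i = 0; i < nfds; i++) {
        struct epoll_event ev = { .events = EPOLLIN, .data.fd = fds[i] };
        if (epoll_ctl(efd, EPOLL_CTL_ADD, fds[i], &ev) < 0) {
            close(efd);
            return -1;
        }
    }
    return efd;
}
```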
Specifically, the … (I admit I tossed out that idea without having thought it through!)
Yes, that makes sense. So flux would also provide a callback or function that the user would run when there was activity on that fd to process flux messages and events, or would they invoke a "run once" instance of the flux reactor? Or were you thinking that users would directly use …?
I was thinking that the user would call …, e.g.:
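Purely for illustration, and with invented names (flux_event_fd() and flux_recvmsg() are not the actual API), the kind of user-written loop being described might look like this:

```c
#include <poll.h>
#include <czmq.h>

/* Invented names -- not the actual flux API. */
typedef struct flux_struct *flux_t;
extern int flux_event_fd(flux_t h);        /* fd that signals flux activity */
extern zmsg_t *flux_recvmsg(flux_t h);     /* dequeue one flux message      */

/* The user waits on the fd and dispatches messages by hand. */
static void user_loop(flux_t h)
{
    struct pollfd pfd = { .fd = flux_event_fd(h), .events = POLLIN };
    while (poll(&pfd, 1, -1) >= 0) {
        zmsg_t *zmsg;
        while ((zmsg = flux_recvmsg(h))) {     /* note: exposes zmsg_t */
            /* ... match the message against watches, RPCs, etc. ... */
            zmsg_destroy(&zmsg);
        }
    }
}
```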
At least in this model we are not in the event loop business.
I understand. Unfortunately I think each user would end up writing their own dispatcher, which is not nearly as nice an interface as we have now (IMO). What would end up happening, I'd guess, is that we'd be writing a "convenience" dispatcher wrapper for users anyway, so that things like kvs watches would work without being cumbersome for the calling library. The example above also exposes zmsg_t...
Following on your ideas, if flux api provided an fd to register to any event loop, and a callback to process/dispatch all flux related callbacks, I think we could get what you want (get out of the event loop business) and what I want (simple APIs to register callbacks for various flux events). I.e. API users would register callbacks to the flux handle as they do now, then they would call a "wireup" function to register the flux reactor to their own event loop, along with a callback that would dispatch flux callbacks as needed when there is flux msg activity (roughly as sketched below). This approach could also allow us to abstract the …

This may be what you were talking about all along, and it just took me a while to understand it.
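A sketch of that split, again with invented names (flux_wireup() and flux_dispatch() are assumptions about the proposed interface, not existing calls): the application keeps its own loop and only needs to run the dispatcher when the flux descriptor is readable.

```c
#include <poll.h>

/* Invented names sketching the proposed interface, not existing calls. */
typedef struct flux_struct *flux_t;
extern int  flux_wireup(flux_t h);     /* fd to watch for flux msg activity */
extern void flux_dispatch(flux_t h);   /* run pending flux callbacks        */

/* The application owns the loop; flux contributes one fd and one call. */
static void app_main_loop(flux_t h, int app_fd)
{
    struct pollfd pfds[2] = {
        { .fd = flux_wireup(h), .events = POLLIN },
        { .fd = app_fd,         .events = POLLIN },
    };
    for (;;) {
        if (poll(pfds, 2, -1) < 0)
            break;
        if (pfds[0].revents & POLLIN)
            flux_dispatch(h);          /* kvs watches, events, etc. fire via
                                          callbacks registered on the handle */
        if (pfds[1].revents & POLLIN) {
            /* ... service the application's own descriptor ... */
        }
    }
}
```

The difference from the earlier user-written loop is that zmsg_t never appears: all message handling stays behind the dispatcher.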
Yeah, I think that is what I was trying to say two comments up. So should we bite the bullet and encapsulate …?
The plan we discussed above is, to summarize:
- API users would integrate the eventfd and dispatcher into their own event loops.
- Comms module users would get an implied libev loop and dispatcher. They would have direct access to the libev event loop, so we don't have to wrap those calls to allow them to extend it.

Are we still thinking this sounds like a good idea?
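For reference, the eventfd mechanism in the first point works roughly as follows (a generic Linux sketch, not flux code): whichever side enqueues a message writes to the eventfd to make it readable, and the dispatcher reads it to clear the notification before draining the queue.

```c
#include <sys/eventfd.h>
#include <stdint.h>
#include <unistd.h>

static int notify_fd;                  /* shared between producer and consumer */

int notify_init(void)
{
    notify_fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
    return notify_fd;                  /* register this fd with any event loop */
}

/* Producer side: called whenever a message is appended to the queue. */
void notify_signal(void)
{
    uint64_t one = 1;
    (void)write(notify_fd, &one, sizeof one);    /* makes notify_fd readable */
}

/* Consumer side: called by the dispatcher before draining the queue. */
void notify_clear(void)
{
    uint64_t count;
    (void)read(notify_fd, &count, sizeof count); /* resets the counter to 0 */
}
```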
A consequence of exposing the …
Ok, I'm having trouble wrapping my head around this one. I found the provision of a simple event loop interface within flux, with specific flux-y things, to be nice. I think in porting all the Lua bindings and other event-based code to raw libev, I would be tempted to write a convenience library anyway that simplified common operations, so I'm not keen on completely deprecating everything in reactor.h.

However, I do think we definitely need … Can we compromise and add a …?

Also, your comment about libev headers is a good one; I don't have any good ideas about that today.
Thanks for that @grondo. Will ponder also, and your compromise sounds perfectly reasonable.
Maybe the accessor for the event loop (with the aforementioned problems) can be omitted as long as there is a way for a user to substitute their own event loop using …
I am working on this code right now and wanted to put a mini brain dump in this issue on the internals, to clarify my own thinking, leave some detailed notes for posterity, and (possibly, if anyone has time to read this) get some feedback.

Pattern after Zeromq Event Loop Integration

The entries for …
or equivalent (perhaps like …).

Refactoring the Flux Reactor

In current master (e.g. ad54d7d) the internal libev reactor loop is replicated in each "connector", which provides a set of callbacks that allow that reactor to be accessed from the generic reactor interface in the handle. Each connector also interposes a message queue in front of its socket for …
The internal (generic) reactor loop will then watch the connector's pollfd file descriptor. The pollfd becomes readable whenever the POLLIN, POLLOUT, or POLLERR bits for the connector are raised, in an edge triggered manner. When the pollfd becomes readable, it indicates that pollevents should be read, and send/recvs can be handled one by one, checking pollevents again after each is handled until the poll bits of interest are no longer set. The generic reactor interface will need to multiplex between the connector and the …

Is edge triggered necessary?

The downside of edge-triggered notification is that integration with a level triggered event loop requires machinations like the ones in the blog post above, which took me a little while to get my head around. Ideally you'd like a file descriptor that behaves more like a regular one, that is, level triggered, and responding as expected to being poll(2)ed with subsets of POLLIN and POLLOUT. Then you could just register it with your event loop and call …

However, I don't think it's possible to do this. For one thing, file descriptors from … I'm open to suggestion here if I'm missing something obvious.
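For concreteness, the zeromq pattern being mirrored looks roughly like this in plain libzmq (generic zeromq usage, not flux code): because the ZMQ_FD descriptor is edge triggered, once it polls readable you must check ZMQ_EVENTS and keep handling messages until the bits of interest are clear before going back to sleep.

```c
#include <zmq.h>

/* After the socket's ZMQ_FD polls readable, keep checking ZMQ_EVENTS and
 * handling messages until ZMQ_POLLIN is clear; with edge-triggered
 * notification, no new edge may arrive for messages already queued. */
static int drain_socket(void *zsock)
{
    for (;;) {
        int events;
        size_t len = sizeof events;
        if (zmq_getsockopt(zsock, ZMQ_EVENTS, &events, &len) < 0)
            return -1;
        if (!(events & ZMQ_POLLIN))
            return 0;                 /* safe to go back to waiting on ZMQ_FD */
        zmq_msg_t msg;
        zmq_msg_init(&msg);
        if (zmq_msg_recv(&msg, zsock, ZMQ_DONTWAIT) < 0) {
            zmq_msg_close(&msg);
            return -1;
        }
        /* ... handle one message ... */
        zmq_msg_close(&msg);
    }
}
```

The connector's pollfd/pollevents pair described above would be consumed the same way, with pollevents playing the role of ZMQ_EVENTS.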
Thanks @garlick, great summary of the issues! Overall, the approach above of having each connector expose an fd and wrapping a generic reactor implementation around each connector's ops sounds like a massive improvement and simplification of the reactor and the individual connectors' code. I didn't know anything about wrapping up multiple fds into a single epollfd until now -- it might take me a while to wrap my head around that one. If we try to expose a "flux reactor fd" for users to integrate into the poll or event loops of their own applications, I worry that this edge- vs. level-triggered behavior will make that very bug prone, if not impossible, for a "casual" user. Perhaps we can document the usage very explicitly, or build some tools around the implementation to make this work the way people expect. Otherwise I would have to assume it might not be useful or not used very often. That being said, staying away from creating another thread is a good idea. Also, for my own benefit, could you elaborate on why the separate queue is necessary?
The separate queue is necessary because zeromq doesn't let you treat a zeromq socket as a raw queue, i.e. you can't avoid zeromq's semantics for message flow (like the dealer-router push/pop of address frames), you don't always have access to the "sending end" of a socket, and sending only allows you to append to the internal queue; it's not possible to push a message to the other end.
Has this been adequately addressed by #225?
Closing. We can open up new issues for further tinkering with the reactor API. |
Some issues with the current reactor callback API design:
- callbacks expose zmsg_t from the CZMQ zmsg class
- zmsg_t destruction by zmsg_send() and our functions that wrap it is confusing