Split out low level interface to a separate dbus-sys crate #85
I'm okay with splitting out the ffi module to a dbus-sys crate. The reason for not doing it is just a historic remnant. Feel free to submit a PR.
But if so, what is it that lists all registered paths for a peer? (I e, what you see with D-feet or other introspection tools when you click on a peer)
I believe they work with
You then check if you got an error back using the
The reason callbacks are minimal (i e, everything just goes into the pending_items queue), is because of panic handling. I want panics to properly propagate to whatever called the dispatch function. The design is partially historic as well, as
Is this the
Do you think it's broken beyond repair, or would you like to help out with fixing the things you see as problematic? I currently don't have that much time so a helping hand would be welcome.
Ah, yeah, that's an important nuance, indeed. That would be the courtesy of

So if you implement
Well, I'm also not sure if
Well, as I've said, I haven't really wrapped my head around your library, so I'm not really sure how that will interact with more complicated aspects of it, like the tree module, but exposing
Yeah, I wouldn't blame anyone here. It's a rare enough use case and it's probably as unexpected to everyone as it was to me that dispatching to callbacks registered with

This could be worked around if there was a way to install one's own filter in the chain before
I'm very sorry to say that I do not have a lot of free time either. To really be able to offer any help I will have to understand your library much more deeply than I currently do. As with the case with

If I started from scratch, I'd probably base the library on a bare-bones dbus-sys core + tokio on top of that + a high-level object interface on top of that. But I don't feel like I know tokio and libdbus threading behavior in enough detail yet. I regret it, but I probably won't be able to offer you any help any time soon, I'm afraid.

On the upside, my current use case can be accomplished with your library as is, albeit in a somewhat hacky way, so there's no pressing need for you to do anything really. :)
As long as you don't call
Makes sense.
Okay. Well, we need a non-tokio version of the dbus bindings too, and then we need them to share as much code as possible. So for me the more logical way is to build the dbus-tokio version on top of the dbus library. I assume you've seen that I've started (if not, see the async directory), but due to lack of time (and tokio understanding) I'm not that far yet. Or we could go the full way and get rid of the libdbus-1 dependency (and that library's peculiarities) altogether...
Yeah, which is an artificial restriction that is not documented anywhere and which precludes reliable use of multiple threads. Normally I would've expected the compiler to somehow prevent me from using a library in a way that would leak memory, which is bad for long-running services, such as those DBus is often used in. DBus's native analog of this functionality,
My opinion is that because blocking is a bad solution in general, I wouldn't expose most of the libdbus calls to the user in anything but the raw form. The really useful parts of the library when you're writing a client are

I would have wrapped just this functionality in Tokio. If the user wants "simple blocking calls", you can build them on top of tokio +
Except for that task you'd only ever really need your
Yeah, saw that too. I'm not exactly proficient with Tokio either, so I'm not sure at the moment how the
Funny you'd say that. I had exactly that idea! But having investigated the matter a bit, I concluded that this endeavor is not for the faint of heart. Libdbus handles quite a lot of things for you that you as a user never have to think about. You use
Finally, with any luck you're connected, so now you see dbus messages coming at you. Now you need to implement your own message de/serialization, which means
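For a flavor of what "your own message de/serialization" entails: the D-Bus specification aligns every value to its natural boundary (4 bytes for a u32, 8 for a u64 or double, etc.), padded with NUL bytes. A minimal sketch of just that one rule; `padding` and `marshal_u32` are illustrative names, not any existing crate's API:

```rust
// Sketch of the D-Bus wire-format alignment rule: pad the buffer to the
// value's natural boundary with NUL bytes, then write the value itself.
fn padding(offset: usize, align: usize) -> usize {
    (align - offset % align) % align
}

fn marshal_u32(buf: &mut Vec<u8>, v: u32) {
    let pad = padding(buf.len(), 4);
    buf.extend(std::iter::repeat(0u8).take(pad));
    // little-endian here; a real implementation honors the message's endianness flag
    buf.extend_from_slice(&v.to_le_bytes());
}
```

And that is only one of many rules; arrays, structs and variants each add their own alignment and length bookkeeping on top.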
I wouldn't say that a simplistic no-fd-passing no-zero-copy client-side-only dbus library isn't possible. E.g. Rubyists managed to implement one: https://github.com/mvidner/ruby-dbus though its conformity to the spec is questionable. But it's gonna prove hard, and then libdbus may or may not incorporate some further security/performance fixes in the future, maybe kdbus will rise from the dead, maybe the spec will get updated, etc. Not to forget that Rust can't really handle failing memory allocations yet, so libdbus is fundamentally more robust in this regard. I would honestly applaud the effort should anyone attempt it.
Connection is not
I'm not saying you're wrong, but basic Rust design disagrees - with std being blocking I/O, and Tokio being a separate crate. So me doing the same would at least be consistent. Also, I'm not sure Tokio is going to be the long-term winner of async I/O. I find the design quite hard to grok compared to other event loops I've used in other languages. (You can't even start listening to an fd without depending on mio, which is going to be problematic when they try to write Tokio on top of glib!)
But for the person who has a strong heart and lots of time, I think Rust would be a good language to write such a library in :-)
Actually, its successor is BUS-1. Let's see what happens with that.
Hmm, I believe this is approximately the parts of Connection that dbus-tokio would end up using. Also, I've heard your objection about register_object_path vs filter, will look into that when/if I get time.
Btw, makes me wonder if it would be possible to use a half-measure so that the dbus library is used for connecting, but then you just grab its fd and do all writing and reading yourself. That way you would only get the second half of the problems you're describing. Also, for bus1, I believe the intention is to create a backwards compatibility layer on socket level, i e, if libdbus were to be rewritten in Rust, it would still work with bus1. Hopefully...
Ah, I didn't notice that

But anyway, should the

You know what, let me put my code where my mouth is... done. Here's a PR. #86
Well, maybe the event loop itself isn't much different, but Rust has quite peculiar futures, and that affects everything.
Well, MsgHandler is of limited utility as it stands. The unconditionally bolted-on
That's nice to hear. It might be that I also will find time and try doing something about it, then.
I rather think that the connect/auth part is much easier than the rest. So I'd rather try using libdbus for the whole message de/serialization thing instead. IIRC there are methods that allow parsing a dbus message from/into a byte array, at least. The headache with adhering to the protocol and fd passing would still be there, though. But, at least, I looked at the alignments/endianness thing again, and I think it wouldn't be all that hard to implement it on par with or even better than libdbus.
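On the endianness part specifically: the first byte of every D-Bus message is an endianness flag (b'l' for little-endian, b'B' for big-endian), and all integers in the message follow it, so decoding reduces to picking from_le_bytes or from_be_bytes. A hedged sketch; `read_u32` is an illustrative name, not a real crate's API:

```rust
// Decode a u32 at offset `off` in a raw D-Bus message, honoring the
// endianness flag the spec puts in byte 0: b'l' = little, b'B' = big.
fn read_u32(buf: &[u8], off: usize) -> u32 {
    let b = [buf[off], buf[off + 1], buf[off + 2], buf[off + 3]];
    match buf[0] {
        b'l' => u32::from_le_bytes(b),
        b'B' => u32::from_be_bytes(b),
        other => panic!("invalid endianness flag: {}", other),
    }
}
```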
Well, from what I read, I can only imagine it working as an alternative faster transport, i.e. an alternative to UDS/TCP. But it's quite likely that it would be just an opt-in, so older libraries would still be able to work via UDS/TCP unmodified.
I've just pushed a few commits for this. Let me know if you like it. :-) Maybe I should also add a switch that always returns "Handled" from the filter, which would entirely disable the default handler(s) if any.
This should now be fixed in git master.
Well, Tree and MsgHandler are my two attempts at a customizable callback setup. What other type of callback setup would you suggest? MsgHandler is quite new and not many people depend on it; I think I could do a redesign and release v0.6 without anyone complaining much.

Edit: Answering my own question so you don't have to: you suggested to
Did I miss anything?
Yeah, those are great. Makes for a clearer mental model, at least for me, though users may still be tripped up by the fact that
You mean something like

The only default handling in libdbus I see is
Of those only the latter seems like it could be missed, as

Funnily enough, I can't seem to find the logic that prevents auto replying to

Anyway, a simple "Sit back, I know what I'm doing" switch might be nice still.
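For reference, the defaults under discussion boil down to very little code. Here's a sketch of what a user filter would have to replicate if libdbus's built-in handling of org.freedesktop.DBus.Peer were disabled entirely; the `Call`/`Reply` types are stand-ins rather than the dbus crate's API, and the fall-through behavior for unknown Peer members is an assumption:

```rust
// Stand-in types; not the dbus crate's API.
struct Call<'a> {
    interface: &'a str,
    member: &'a str,
    no_reply_expected: bool, // the NO_REPLY_EXPECTED message flag
}

#[derive(Debug, PartialEq)]
enum Reply {
    Empty,             // empty method return
    MachineId(String), // reply carrying the machine UUID
    None,              // fall through to user handlers
}

fn default_peer_handling(c: &Call) -> Reply {
    // libdbus never auto-replies when the caller asked for no reply
    if c.no_reply_expected || c.interface != "org.freedesktop.DBus.Peer" {
        return Reply::None;
    }
    match c.member {
        "Ping" => Reply::Empty,
        "GetMachineId" => Reply::MachineId("stand-in-machine-id".into()),
        _ => Reply::None, // assumed fall-through for unknown members
    }
}
```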
The git changes are very helpful indeed. But the bolted-on aspect of

The latter might be made possible with a simple boolean switch. Alternatively, one can register the queue pusher callback only if the user calls
Ah, that referred to the idea of exposing some refined version of

For example, there's that little snag with the current design that, while eavesdropping, one has to call not only

One possible design could be, yeah, making

For an alternative, lower-level design, imagine
Then there would be
I don't think so. Though the whole design needs to be considered carefully, as well as the whole question of whether you really want to reimplement everything that libdbus supports and guarantees already. E.g. for

Timeouts are another can of worms. Even if you reimplement them yourself for
So, when the queue push thing was implemented,

I like the idea of having a callback that could determine whether or not to call libdbus's fallback methods. We just need to carefully consider what should happen in case this callback panics. Which is now possible to handle gracefully, now that
This sounds like something handled inside
Yeah. Well, assuming we manage to make the handler loop UnwindSafe, the first thing to try is to just close the connection. I don't think other coping strategies are really valid here. I'm not sure right now whether it's allowed from inside the filter callback, and whether it's allowed to call close() twice, or whether we should also add a boolean field to the struct so that Drop doesn't do it again, etc. Don't have time to check sources, maybe later.
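The close-then-rethrow idea can be sketched with std's panic machinery; `dispatch_one` and `close_conn` are hypothetical names, and AssertUnwindSafe stands in for whatever UnwindSafe guarantee the handler loop ends up making:

```rust
use std::panic::{catch_unwind, resume_unwind, AssertUnwindSafe};

// Run one user handler; on panic, close the connection first, then resume
// the unwind so the caller of dispatch still observes the original panic.
fn dispatch_one<F: FnOnce()>(handler: F, close_conn: impl FnOnce()) {
    if let Err(payload) = catch_unwind(AssertUnwindSafe(handler)) {
        close_conn();
        resume_unwind(payload);
    }
}
```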
Right. I've seen the libdbus server code using timeouts for auth/connection, and assumed client code would do the same. So, now I see, libdbus client is blocking during connection. That's a bit sad.
What do you think: 4f2870d

Not sure why we want to close the connection in case of panic?
That was compared to other more lax approaches, like skipping the failed handler (I imagined there would be a Vec of them) or skipping the failed message altogether. Bubbling the panic up to
Looks fine. Though I wonder if it'd be better to add

Now, though, since there isn't a Vec of handlers (which may or may not be a good thing really, considering the necessary dances like the one you did with

More troubling still is that they're unable to use their custom handler at all without calling

If there were a looping construct alternative to iter() that would solve the above problem, by basically exposing some form of
Since the unwind is resumed, I don't think this is necessary? I e, unless the user catches the panic there is no way for the user to observe the potential broken invariants inside the user's filter_cb.
I'm thinking that making default_filter_callback public would be the most appropriate here. That way a user can also temporarily switch to another callback, and then switch back to default_filter_callback when done, which could be useful.
If you call iter() it will return a
Yeah, it isn't necessary. I just thought that they might be catching it to guard against some other processing, never expecting
I, on the contrary, think that providing the getter would be most general, though making
Right. Of course it does; I'm not sure how this could have slipped my mind. I always write my responses while being too tired to think, it seems, and now I missed the negation before the queue emptiness check while rereading the code of
By the way, what's the point of
What do you think? #87
@albel727
No idea, so I like your suggestion to remove it :-)
Hmm. So I made a change that made set_message_callback return the old callback. I think that's the closest to a getter that we can get? At least if we allow closures and not just "free functions without environment", I mean - how do you make a (useful) getter for a closure?
FWIW, your hitrate (i e rate of correct responses compared to missed things) seems pretty good to me :-)
Maybe we didn't. Thing is,
So I decided against including this method in libdbus-sys. But one can't

I could add a new empty

So I decided to simply copy the enum, since the libdbus constants are quite unlikely to ever change now, so no harm from duplication is expected. In fact the first thing I wrote was like
But then I wondered if all stable compilers support this form of initialization, and wrote the simple constant version instead. I also took this opportunity to rename

I guess I can move it into libdbus-sys, and qualify by
Well, in my imagination it worked like that. There would be

```rust
let old_cb = c.take_message_callback();
c.set_message_callback(|c, m| {
    if m == what_i_want {
        // do stuff
        return true;
    }
    old_cb.map(|f| f(c, m)).unwrap_or_default()
});
Library::install_another_callback(&c);
// finally run it
for ci in c.iter() { ... }
```

One can somewhat emulate
Hmm. I was wondering if there should in fact not be a
Ah, so you meant to take rather than get. I guess one could also do something like:
...but I don't know if it would be better, really.
Well, the same applies to

I guess flag enums can be converted, e.g. to modules with consts in them, but I'm not sure there's much benefit to it. I don't expect there would be confusion about the fact that these enums don't strictly function as Rust enums, but as convenient groupings for constants. Using enums for flags is familiar enough that people are likely to implement something like

On the other hand, it might be inconvenient to cast enum constants to ints everywhere. So, I'm not sure on the matter. Maybe I'll leave the decision to you. What do you think we should do here?
Yeah, I understand a literal getter there would be hard to come by, unless one would change
Yeah, that works too, sans a missing
Ok, so I changed dbus-sys to what I think is better, and left it the way it was in dbus for backwards compatibility.
But it's still ugly. Hmm, what if we add
Might as well do the same with WatchEvent then?
I think it's rather pointless. It would needlessly expand the API surface, just to save a few keystrokes, while still making a Box allocation and having nontrivial side effects instead of failing fast, if the user mistakes it for a true getter and/or accidentally calls it several times in a row/calls it again after setting their own callback. It would mean a couple of confused minutes of debugging for the user to figure out why their application seems to almost function, seeing

To compare, putting
|
Though I begin to doubt that something will be allocated on the heap for plain function arguments, thanks to trait object and ZST optimizations. Oh well.

EDIT: Yeah, it wouldn't.
Yup. Done.
Thanks for the review. I've listened to your arguments and changed the code accordingly. Have a look. The message callback can now be unset. One now has to write an extra
Great. Here, to maybe save you time. #88
👍 Merged. Btw, are you happy with libdbus-sys as it looks now, i e, should I release a v0.1.0 of it?
I think I am. Yeah, go ahead. I doubt the already existing stuff would change from now on, only some missing methods being appended, which can well be a backward-compatible v0.1.1, if such a need arises.
This would be useful for separation of concerns and people will be able to build upon their own high level interface on the bindings.
Thing is, there are some inconveniences stemming from the current design, like the inability to handle MethodCall/Error from the iter(), etc.
Reliance on register_object_path() to handle objects is also something that can be avoided, because dbus_connection_register_object_path() and friends are just helper methods that basically create a BinarySearchTree<Path, Callback> (modulo fallback handling). User can totally use dbus_connection_add_filter() callbacks and implement own path dispatching instead.
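The point above can be illustrated with a toy dispatcher: a single filter callback plus an ordered map replicates the essence of register_object_path (minus fallback handling). `Message`, `Handler` and `PathDispatcher` here are stand-ins for illustration, not dbus crate types:

```rust
use std::collections::BTreeMap;

// Stand-in for a real message type.
struct Message {
    path: String,
}

// true = handled, false = let the message fall through to other filters.
type Handler = fn(&Message) -> bool;

struct PathDispatcher {
    handlers: BTreeMap<String, Handler>,
}

impl PathDispatcher {
    fn new() -> Self {
        PathDispatcher { handlers: BTreeMap::new() }
    }
    fn register(&mut self, path: &str, h: Handler) {
        self.handlers.insert(path.to_string(), h);
    }
    // What the single dbus_connection_add_filter() callback would do:
    // look up the object path and dispatch, or fall through.
    fn dispatch(&self, m: &Message) -> bool {
        self.handlers.get(&m.path).map_or(false, |h| h(m))
    }
}
```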
The callback you unconditionally install with dbus_connection_add_filter() for the sake of iter() precludes the simplest "one callback for everything" handling that a user might want, and you don't provide a high-level function for installing additional user filters that could bypass this limitation by allowing messages to be handled before they're swallowed by the pending_items queue or silently discarded altogether (like Errors or unregistered MethodCalls). The Connection::msg_handlers() interface doesn't help with that in the slightest.
This is particularly inconvenient if the user enabled eavesdropping on some messages, because then your object abstraction that relies on dbus_connection_register_object_path() begins to leak, since the latter doesn't check the destination field at all, so your objects begin to receive method calls destined for others as long as the path matches, and, what's worse, to answer them, resulting in multiple replies on the bus (one from you and one from the original destination).
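The missing check amounts to a one-liner that the path-based dispatch could apply before answering; `is_for_us` is a hypothetical helper, and treating a missing destination as addressed-to-us is an assumption (broadcast signals carry none):

```rust
// Hypothetical eavesdropping guard: only answer method calls whose
// destination matches our unique (or well-known) bus name.
fn is_for_us(destination: Option<&str>, our_name: &str) -> bool {
    match destination {
        Some(d) => d == our_name,
        None => true, // no destination set: assumed addressed to everyone, incl. us
    }
}
```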
You also don't seem to handle Error replies at all. In the case of Connection::send_with_reply() that implies a handler leak: a MessageReply instance will hang in the self.handlers array forever if the call fails, even if an Error with the corresponding reply_serial is received.
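A sketch of the fix: the pending-reply table has to remove the waiter on either a MethodReturn or an Error carrying the matching reply_serial. All names here are illustrative stand-ins, not the library's actual types:

```rust
use std::collections::HashMap;

// Illustrative stand-ins for incoming messages and the pending-reply table.
enum Incoming {
    MethodReturn { reply_serial: u32 },
    Error { reply_serial: u32 },
}

struct Pending {
    handlers: HashMap<u32, Box<dyn FnOnce(Result<(), String>)>>,
}

impl Pending {
    // Remove and fire the waiting handler on *both* reply kinds,
    // so a failed call no longer leaks its handler.
    fn complete(&mut self, msg: &Incoming) {
        let (serial, res) = match msg {
            Incoming::MethodReturn { reply_serial } => (*reply_serial, Ok(())),
            Incoming::Error { reply_serial } => (*reply_serial, Err("call failed".to_string())),
        };
        if let Some(h) = self.handlers.remove(&serial) {
            h(res);
        }
    }
}
```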
All in all, I'm having a hard time wrapping my head around your library, so I would really appreciate it if the extern definitions and message serialization/deserialization code were split out to separate crates, so that one could build upon them.