Async/await pattern #318
I'm not quite sure what you mean by "more freedom" in your second paragraph. For the first paragraph: correct me if I'm wrong, but this would either require an extra thread that does all the reading (and that should also do all the writing, but that's not possible in the current design), or it would require writing our own executor. If I understand the … Any ideas on how to do that exactly? Also, what is your use case? Do you suggest it based on "it would be cool", or do you have something in mind that is only possible in an ugly way currently and would be much nicer with async/await? |
From a user's perspective, probably yes. But it doesn't have to be; for example, tokio has a single-threaded executor. From the library's perspective, all we need to do is implement the `Future` trait.
While we could do it this way, and managing … As for "freedom", consider this example:

```rust
// has to flush and wait for reply on each request
conn.create_window(COPY_DEPTH_FROM_PARENT, win_id, screen.root, 0, 0, 100, 100, 0, WindowClass::InputOutput,
                   0, &CreateWindowAux::new().background_pixel(screen.white_pixel))?.check();
conn.map_window(win_id)?.check();
```

compared to

```rust
// minimal code change, but now this can happen concurrently with other requests
conn.create_window(COPY_DEPTH_FROM_PARENT, win_id, screen.root, 0, 0, 100, 100, 0, WindowClass::InputOutput,
                   0, &CreateWindowAux::new().background_pixel(screen.white_pixel))?.await?;
conn.map_window(win_id)?.await?;
```
Mostly, because this is cool. But I also think the libxcb model is harder to reason about. For example, if I don't explicitly check, I get errors as events, which are hard to correlate back to the original request; if I do check, then I introduce waits into the code. And it's difficult to express "I want to ignore the errors". Async/await lets me do all of these naturally. Also, there are problems like:

```rust
let cookie1 = conn.create_window(COPY_DEPTH_FROM_PARENT, win_id, screen.root, 0, 0, 100, 100, 0, WindowClass::InputOutput,
                                 0, &CreateWindowAux::new().background_pixel(screen.white_pixel))?;
conn.map_window(win_id)?.check();
cookie1.check(); // <- can i still do this?
```

I know this is a synthetic example, but the point is that, it seems to me, the cookies are quite leaky abstractions, and that is very "un-cool". |
Also, there is the argument that if we are using Rust, we should make the best use of the tools Rust provides. |
Hm... doing this properly would also require making the actual sending of requests async. |
Yes. I have realised it might be a better idea to start a new "async-x11rb"
project from scratch, instead of grafting async onto x11rb.
But probably a lot of the work done here could be reused, like the code
generation stuff.
|
Well... after thinking a bit more about it, I am not quite sure how an X11 connection can be shared between futures without much copying and with async request sending. Let's say future A wants to send a big request to the X11 server. The kernel accepts half of the data, so the rest needs to be resubmitted later. The "no copying" requirement means that all other futures are now blocked on this one finishing sending its request (which is fine - in x11rb, something similar happens with threads instead of futures). However, it is now possible that future A is just dropped instead of polled again. Due to the "no copying" rule, we now have a big problem. (And this situation cannot even be detected - future A could be leaked instead of dropped.) The only ways around this that I see are: …
For your code example above: well, you just replaced `.check()` with `.await?`. Also:

```rust
let cookie1 = conn.create_window(COPY_DEPTH_FROM_PARENT, win_id, screen.root, 0, 0, 100, 100, 0, WindowClass::InputOutput,
                                 0, &CreateWindowAux::new().background_pixel(screen.white_pixel))?;
conn.map_window(win_id)?.check();
cookie1.check(); // <- can i still do this?
```

Yes, you can. Everything else would be quite a bad API (because it would allow simple misuse). In fact, you should do it this way to save a round-trip to the X11 server (both requests are sent together instead of one after the other). Actually... going the "Future way" means that people will most likely …
My plan with #314 is that the error would at least tell you which request failed (as a string). That way, you only have to search for calls to … I guess I could also add some debugging tips to … |
Random idea: add a function on cookies that takes a callback which is called when the reply is available. That's basically what is needed to implement `Future`.
(The name of the function is intentionally bad and needs improvement) |
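To make the connection between the callback idea and async/await concrete, here is a minimal, hypothetical sketch (none of these types exist in x11rb): the callback registered on a cookie stores the reply in shared state and wakes the waiting task, which is essentially all a hand-written `Future` needs.

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::{Arc, Mutex};
use std::task::{Context, Poll, Waker};

// Shared state between the reply callback and the future awaiting the reply.
struct Shared<R> {
    reply: Option<R>,
    waker: Option<Waker>,
}

struct ReplyFuture<R> {
    shared: Arc<Mutex<Shared<R>>>,
}

impl<R> Future for ReplyFuture<R> {
    type Output = R;
    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<R> {
        let mut shared = self.shared.lock().unwrap();
        if let Some(reply) = shared.reply.take() {
            Poll::Ready(reply)
        } else {
            // No reply yet: remember who to wake once it arrives.
            shared.waker = Some(cx.waker().clone());
            Poll::Pending
        }
    }
}

// The callback registered on the cookie would do roughly this when the reply arrives.
fn on_reply<R>(shared: &Arc<Mutex<Shared<R>>>, reply: R) {
    let mut shared = shared.lock().unwrap();
    shared.reply = Some(reply);
    if let Some(waker) = shared.waker.take() {
        waker.wake();
    }
}
```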
I think this would not be very useful in practice due to lifetime constraints.
|
I don't think this is a bad option. The ownership can be returned when the future resolves.
Because they are doing different things. If you want to replicate the …
Ok, I wasn't aware of that. How is it implemented? Does the library keep the reply indefinitely? |
Replacing
Yup, and so does libxcb. Although with libxcb it is a lot easier to get a memory leak this way, since Rust has `Drop`. |
Well, some more information about this: every request has a sequence number. The first one that the client sends (the connection setup) is request 0, the next request has seqno 1, etc. A cookie is (in both XCB and x11rb) basically a wrapper around the seqno of the request. The X11 protocol itself uses 16 bit sequence numbers, but since the server handles requests in order, one can reconstruct a larger sequence number from incoming "stuff" (the client just has to make sure to send a request with a reply at least every 2^16 requests). XCB and x11rb use 64 bit sequence numbers (well, XCB originally used …). Thus, every request has a unique number that identifies it, and getting a reply is basically a lookup by that sequence number. |
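As an illustration of the reconstruction described above, here is a small standalone sketch. This is not x11rb's actual code, just the arithmetic of extending a 16-bit wire value to a wider counter, assuming packets are seen in order and fewer than 2^16 requests pass between two packets the client actually observes.

```rust
/// Extend a 16-bit on-the-wire sequence number to a full 64-bit one,
/// given the last full sequence number we have seen.
fn reconstruct_seqno(last_full: u64, wire: u16) -> u64 {
    // Take the high bits from the last known full sequence number...
    let mut candidate = (last_full & !0xffff) | u64::from(wire);
    // ...and bump to the next 16-bit epoch if the low bits wrapped around.
    if candidate < last_full {
        candidate += 0x1_0000;
    }
    candidate
}

fn main() {
    // If the last known full seqno was 0x1_fffe and the wire now says 0x0001,
    // the low 16 bits wrapped around once, so the full value is 0x2_0001.
    assert_eq!(reconstruct_seqno(0x1_fffe, 0x0001), 0x2_0001);
}
```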
(Very) loosely related: https://github.com/Diggsey/posts/tree/master/async-mutexes |
If we want to go down this route (currently I don't), I think it would be best to split up the crate, something like … A bit of context: I have been thinking about how to integrate X11 stuff with calloop, but haven't found a good solution. |
I hacked together an ugly hack on top of Smithay/smithay#254 that provides callback-based request handling. Consider it a proof of concept. There is an … If we add something like …, the best thing would be to split up the crate, I guess. Something like … I still don't want to go down this route properly. A proof of concept is enough for now. The proof of concept deals with the problems with lots of … |
So it's been ~2 years, I've gotten more familiar with Rust async, and I was looking into this again, reading the code and trying to figure out the least intrusive way of adding async. I came across a question that's tangentially related, but I don't want to open an issue just to ask that, so I will ask it here. The function …
And it doesn't differentiate these 2 cases, which can cause a problem for multithreaded programs when: …
At this point thread 1 is blocked even though there are packets in the queue. I didn't read the whole codebase through, so maybe I am missing something and this isn't a problem. Curious to know what you think about this? |
All of the connections involved implement … I think it'd be relatively easy for the pure Rust connection, since the current methods would just replace … |
@yshui Who exactly is calling …? If you try to use … (All of this applies to libxcb as well, I think.) If I misunderstood you: sorry. Please clarify where this … @notgull Yup, I guess that would more-or-less work. But … Actually, async reading is the easy part, I think. In fact, I guess I would just spawn another task that does nothing else than reading from the connection and putting packets into some kind of queue. This would also work in the sync case: just spawn a thread. I just didn't like the idea of spawning threads behind people's backs, so lots of complicated code was necessary. Anyway, the hard part with async is writing: X11 requests can be quite large (the maximum is around 16 MiB).
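To sketch the "task that does nothing but read" idea, here is a rough, hypothetical example using tokio. The `Packet` type, `reader_task`, and the parsing are made up for illustration; real X11 framing (variable-length replies, fd passing) is omitted.

```rust
use tokio::io::AsyncReadExt;
use tokio::net::unix::OwnedReadHalf;
use tokio::sync::mpsc;

/// Stand-in for a parsed X11 packet (reply, event, or error).
struct Packet(Vec<u8>);

async fn reader_task(mut stream: OwnedReadHalf, queue: mpsc::UnboundedSender<Packet>) {
    loop {
        // Every reply/event/error starts with a fixed 32-byte chunk; replies carry
        // a length field for any additional data. Parsing is heavily simplified here.
        let mut header = [0u8; 32];
        if stream.read_exact(&mut header).await.is_err() {
            return; // connection closed or broken
        }
        // ... parse `header` and read the rest of a longer reply if needed ...
        if queue.send(Packet(header.to_vec())).is_err() {
            return; // the receiving side (the connection object) was dropped
        }
    }
}
```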
It's called |
@psychon Thanks for the answer! You understood it correctly. You are right that multithreaded use of the connection like this is racy. My example probably doesn't make much sense, but I came up with it because I was thinking in a more async context. Each async task behaves somewhat like an individual thread, and they might need to individually call … The solution to this would be making the … As for the buffer problem, personally I think copying the buffer isn't all that bad (the max size is 16M, but realistically most requests should be small, unless you are sending a 1000x1000 bitmap over the wire). But even if you don't want to do that, I can settle for sending the request synchronously. Receiving the replies/events asynchronously is what I really want. |
Yeah, true. Which is why this exists: x11rb/src/rust_connection/mod.rs, line 378 (at df1ff30).
Something similar would be needed for "the async world". I guess the best approach would be to start from "almost zero" with a new connection (the existing … Hm.... we already have the … |
@psychon I don't think this is really a problem. At an await point, yes, a future can be dropped. The solution, then, is to "poison" the display to prevent further communication. libxcb does something similar on error. Although I have to wonder whether it's possible to recover from this if buffering is an option? |
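A minimal sketch of the poisoning idea, with made-up names: a guard owned by the sending future flips a flag if the future is dropped before the request was fully written, and every later operation checks that flag instead of sending bytes interleaved with a half-written request.

```rust
use std::sync::atomic::{AtomicBool, Ordering};

struct Connection {
    poisoned: AtomicBool,
    // ... stream, buffers, etc.
}

/// Held by the future while it is writing a request.
struct SendGuard<'a> {
    conn: &'a Connection,
    finished: bool,
}

impl Drop for SendGuard<'_> {
    fn drop(&mut self) {
        if !self.finished {
            // The future owning this guard was dropped mid-write.
            self.conn.poisoned.store(true, Ordering::SeqCst);
        }
    }
}

impl Connection {
    fn check_usable(&self) -> Result<(), &'static str> {
        if self.poisoned.load(Ordering::SeqCst) {
            Err("connection poisoned by a cancelled partial write")
        } else {
            Ok(())
        }
    }
}
```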
Here is a proof of concept for my channel idea: https://gist.github.com/psychon/0b8b59b30d3253254b37e6267dbee471

```toml
[dependencies]
tokio = { version = "1", features = ["full"] }
x11rb = "0.9"
```

All of this is basically untested, but it provides an API for sending requests, receiving replies, and receiving events, all of which only need a shared reference. I tested this against x11rb's … Proper error handling is left as an exercise for the reader. Doing something sane when the connection is dropped (stopping the reading and writing tasks) is also left as an exercise for the reader. I wonder why the … One could of course also easily use @notgull's poisoning idea instead of the channels for writing. For reading, I feel like the channels and the extra tasks simplify things a lot. (Oh, hey, @notgull also just handled the "shut everything down on drop" problem: when the write-end of the connection lives in the connection, it is also dropped. This … |
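For readers who do not want to open the gist, a condensed sketch of the channel approach might look roughly like this (types and names are invented here, not taken from the gist): requests travel to a writer task over an mpsc channel together with a oneshot sender, which the reader task later completes with the matching reply. Because all synchronization lives in the channels, the user-facing methods only need `&self`.

```rust
use tokio::sync::{mpsc, oneshot};

/// A request on its way to the writer task. `reply` is `None` for requests without a reply.
struct OutgoingRequest {
    bytes: Vec<u8>,
    reply: Option<oneshot::Sender<Vec<u8>>>,
}

#[derive(Clone)]
struct Connection {
    to_writer: mpsc::UnboundedSender<OutgoingRequest>,
}

impl Connection {
    // Only needs `&self`: the channels provide the synchronization.
    async fn send_with_reply(&self, bytes: Vec<u8>) -> Result<Vec<u8>, &'static str> {
        let (tx, rx) = oneshot::channel();
        self.to_writer
            .send(OutgoingRequest { bytes, reply: Some(tx) })
            .map_err(|_| "writer task has shut down")?;
        // The reader task resolves `tx` once the reply with the matching
        // sequence number comes in.
        rx.await.map_err(|_| "connection closed before the reply arrived")
    }
}
```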
@yshui So, what's your position on tokio vs async-std vs anything else? Any preferences? I looked at this briefly and wanted to do everything perfectly and abstract over all possible runtimes, but... well, that's not really possible, I guess. I feel like I am using the wrong approach when I type the following:

```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
#[non_exhaustive]
pub enum Runtime {
    Tokio,
    AsyncStd,
}
```
|
@notgull Basically the same question to you. I took a look at breadx and it has … |
The way I did it was that, if you ran with … This isn't an ideal solution. What I'm working towards now is doing away with … |
So it's possible to write this in a runtime-agnostic way. The standard library provides a … |
So, a runtime? The main advantage of runtimes is that they allow for cooperative scheduling, and that advantage is lost in rolling your own like this. |
@notgull This is more like rolling our own reactor (although this term is no longer used). Since we are only managing a single connection, I don't think this is so bad. The async tasks are still scheduled by the runtime; it's just that we need to manage the connection ourselves. |
@yshui My main concern is: is the advantage of runtime independence worth the overhead that would be incurred by rolling our own thread? Not only because of the thread overhead, but on Linux (which is basically the main platform X11 runs on) most runtimes use epoll. |
@notgull Most async runtimes use threads anyway (well, I guess some let you choose, but most of the time you will be using threads). I don't think having a polling thread would be that much overhead anyway. And we only have a single fd to worry about, so I think poll will be fine. Unless the user wants to make a whole bunch of connections to the X server (which would be a weird way to interact with it), this shouldn't cause any concern. |
Threads, technically. But on top of that, there are usually several other async runtime components.
We only have one fd, but what about other packages? Keep in mind X11 is used as a graphical front end for programs that are intended to do other things as well, like writing to files or communicating with servers. Having one thread that polls all of those other fds and another just for X11 seems a little silly.
It has at least twice as much overhead, since you're now calling poll() in two places. In addition, I'm sure maintaining an entire runtime/reactor just for async X11 is out of scope for this crate anyway. |
It would just be a thread that calls poll() and reads the data and feeds it into the rest of the async support code, which we have to write either way. I bet it would be less than 100 LOC.
The alternative is to: …
So, trade-offs.
I don't see how that would double the overhead. Well, I guess the best solution is decoupling the I/O from the rest of the protocol; this way the user could plug our fd into whatever runtime and drive x11rb from there. But just having an extra thread seems so much simpler. |
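A rough sketch of that "thread that calls poll()" idea, assuming the `libc` crate as a dependency and a made-up `deliver_to_async_side` hook for handing data to the async support code:

```rust
use std::io::Read;
use std::os::unix::io::AsRawFd;
use std::os::unix::net::UnixStream;

fn polling_thread(mut stream: UnixStream, deliver_to_async_side: impl Fn(Vec<u8>)) {
    let fd = stream.as_raw_fd();
    loop {
        let mut pfd = libc::pollfd { fd, events: libc::POLLIN, revents: 0 };
        // Block until the X11 socket becomes readable (or an error occurs).
        let ret = unsafe { libc::poll(&mut pfd, 1, -1) };
        if ret < 0 {
            return;
        }
        let mut buf = vec![0u8; 4096];
        match stream.read(&mut buf) {
            Ok(0) | Err(_) => return, // EOF or error: stop the thread
            Ok(n) => {
                buf.truncate(n);
                // Hand the raw bytes to whatever async machinery sits on top.
                deliver_to_async_side(buf);
            }
        }
    }
}
```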
My PoC hack above does not only use tokio for I/O, but also uses channels and mutexes from tokio. Without these, I do not really see how request sending could work with … Reimplementing these should be relatively easy to do, but it also feels like "not invented here" syndrome. Reinventing the wheel cannot be the best answer to this ecosystem split. I also thought "just use a trait and let users plug in an implementation", but, well, … (Hm, this could work with hand-written futures... Is this a better idea than writing a new runtime? And … |
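One possible shape for such a trait, sketched here hypothetically: if the trait only exposes poll-based I/O (mirroring how `AsyncRead`/`AsyncWrite` are defined in the futures ecosystem), it does not itself depend on any runtime and works with hand-written futures, although it still does not cover the channels and mutexes mentioned above.

```rust
use std::io;
use std::pin::Pin;
use std::task::{Context, Poll};

/// Hypothetical runtime-agnostic stream abstraction that users would implement
/// on top of tokio, async-std, or a hand-rolled reactor.
trait AsyncStream {
    fn poll_read(
        self: Pin<&mut Self>,
        cx: &mut Context<'_>,
        buf: &mut [u8],
    ) -> Poll<io::Result<usize>>;

    fn poll_write(
        self: Pin<&mut Self>,
        cx: &mut Context<'_>,
        buf: &[u8],
    ) -> Poll<io::Result<usize>>;
}
```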
I think discussing it like this is kinda moot because there could totally be things I didn't think of that make my approach 10x harder. I wish I could put in time to make a PoC, but right now I am working on something else. Edit: @psychon yeah, I forgot about the mutexes.
There's |
This is actually what I do at the moment for breadx, for reference, and I plan to continue doing it into future versions. |
Instead of using the cookie+reply pattern used by libxcb, Rust's async/await pattern seems to be a much more ergonomic way of doing asynchronous communication with the X server.
Also, the libxcb pattern allows the user to control when requests are sent and when buffers are drained by calling xcb_request_check and the *_reply functions, whereas the async/await pattern would allow us more freedom in that regard.