Async IO for embedded concurrency #23
Comments
It is significantly less ergonomic, usually.
It removes the need for a machine-level scheduler, which has the indirect benefit of greatly conserving stack space, removing the need to care about stack overflows (which are often hard to detect), and making the code more portable by eliding assembly. |
Stack space is conserved because the task knows when it will context switch, so it can minimize the amount of information that it needs to store when idle, right? Like in the futures-rs world, the "stack" space would be limited to the size of the final struct that implements `Future`. Is my understanding correct here? |
Pretty much. There are no real tasks in the futures world, hence no stack space reserved for them. Note there's also no reliable way to predict stack usage with rustc, and no especially good way with C compilers either. |
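To make that concrete, here is a minimal, hypothetical sketch; it uses today's async/await syntax (which postdates this discussion) and runs on a hosted target just to print a number. The only per-task storage is the anonymous struct returned by `task()`, whose size is fixed at compile time:

```rust
use core::mem::size_of_val;

// Placeholder leaf operation; a real driver would return a future that
// completes when the hardware is ready.
async fn read_byte() -> u8 {
    42
}

// The whole task. Everything it keeps alive across the .await points is
// stored inside the anonymous struct returned by calling `task()`.
async fn task() -> u16 {
    let a = read_byte().await;
    let b = read_byte().await;
    (a as u16) + (b as u16)
}

fn main() {
    let fut = task();
    // No per-task stack is reserved; the "stack" is just this value.
    println!("task state: {} bytes", size_of_val(&fut));
}
```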
If my understanding is right, with async IO we still need some kind of event loop:
I do not really understand how to implement this. Future polls are non-blocking. What is the idiomatic way to wait for a hardware event without continuously polling a future? How do you give back the time you do not need? |
Give back the time to whom? There isn't anything else running on the same core. You just poll the future continuously, with the exception that you may want to sleep and wait until you get an interrupt or a timer expires. |
Generally true for basic MCU development. One big reason to design for periods of "blocking" is to enable putting the processor into a lower power state until an interrupt (I/O or timer) wakes the processor up again to do some work. |
@posborne I specifically mentioned that. You don't really have to give time back or even track future states in a more finely grained way. It's enough to recheck all the ready bits at the next wakeup, and make sure to set up a hardware timer to wake you up at the closest software timer expiry. |
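As a rough sketch, here is what that loop can look like using today's `core::task` API (which postdates this discussion) and the `cortex-m` crate's `wfi` intrinsic; `block_on` and `noop_waker` are illustrative names, not code from any project mentioned in this thread:

```rust
use core::future::Future;
use core::pin::Pin;
use core::ptr;
use core::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A do-nothing waker: we don't track per-future wakeups because any
// interrupt ends the WFI sleep and we simply poll again.
fn noop_waker() -> Waker {
    unsafe fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(ptr::null(), &VTABLE)
    }
    unsafe fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    // Safety: the vtable functions never touch the (null) data pointer.
    unsafe { Waker::from_raw(RawWaker::new(ptr::null(), &VTABLE)) }
}

/// Poll a single future to completion, sleeping between polls.
pub fn block_on<F: Future>(mut fut: F) -> F::Output {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    // Safety: `fut` is a local that is never moved after being pinned.
    let mut fut = unsafe { Pin::new_unchecked(&mut fut) };
    loop {
        match fut.as_mut().poll(&mut cx) {
            Poll::Ready(out) => return out,
            // Not ready: sleep until the next interrupt (I/O, or a hardware
            // timer set to the closest software timer expiry), then re-check
            // the ready bits by polling again.
            Poll::Pending => cortex_m::asm::wfi(),
        }
    }
}
```

Because there is only one core and nothing else to run, a no-op waker plus WFI is enough; a finer-grained waker only becomes interesting once several tasks share the loop.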
Since I'm really really really new to embedded programming, I would like to see a blog post talking about the theory and the possible Rust implementations coming out of issues like this one. 😹 |
So it sounds like, in the embedded world, "async IO" really just means co-routines. So I wonder: is there any benefit to using a `Future`-style abstraction at all? I'm asking because there seems to be a lot of excitement around futures in Rust, and I'd like to know if something like that would be a good thing for the embedded Rust community to rally around. If it makes writing concurrent embedded code any better, then it could be a big selling point for Rust in the embedded world. |
So I've built both single-threaded 'co-routine' style systems and many-threaded message-passing systems on a variety of embedded architectures, and there are merits to both approaches. For Rust to work for embedded development, we need a driver framework that supports all of these approaches, and more we haven't even thought of. Sometimes I'll want to spin up a serial thread that blocks on the UART and posts messages to a queue when characters arrive. Sometimes I'll want to build some sort of async system that can pend on both UART characters received and a network socket simultaneously. Sometimes I'll want to spin polling four UARTs in rapid succession and do nothing else. It just depends.

The key, I think, will be starting to produce drivers that are flexible enough to work in all these environments, without unnecessary dependencies or forcing people to consider/learn a model they'd rather not use. I've made a start with https://crates.io/crates/embedded-serial and I'd like to see more common functionality abstracted through traits like this. |
I think futures is a really promising paradigm for async I/O on embedded systems. However, I'm still skeptical that it's actually possible to pull it off without dynamic memory anywhere.

@awelkie we actually don't use co-routines per se; components in the kernel are just cooperatively scheduled (co-routines imply notions of independent execution stacks, which we intentionally wanted to avoid since that falls into basically requiring dynamic heap memory).

For reference, generally speaking, async I/O code for, e.g., UARTs, I2C sensors, etc. isn't too unwieldy without something like futures in our experience. Here's a totally asynchronous accelerometer driver. It's 200 lines, but most of that is a helper enum to name the sensor registers. It's a relatively simple state machine, so the code is reasonable to follow and relatively easy to get right. That's been pretty representative in our experience.

An outlier is an RF233 driver we're working on. It's relatively big (nearly 1000 LoC), and because the RF233 has lots of states and sub-states, the code is difficult to follow as straight-line code. Conversely, as is generally the case with state machines, if you want to answer something like "am I handling the TX_READY state properly?" or "have I covered all the cases that lead to the error state?", it's easier, IMO, to follow than the kind of code that results from futures. For comparison, our process-based implementation (ported from Contiki, basically, written in C, and soon to go away) is probably easier to read at first but harder to debug (again, IMO): https://github.com/helena-project/tock/blob/master/userland/examples/rf233/rf233.c

All of that is really just data points and not an opinion. Overall, I think the requirements for async I/O on embedded are sufficiently different than on desktop/server/mobile that I'm skeptical the same abstractions will work. Similarly, merging both synchronous and asynchronous models in the same traits/crates/drivers, while accounting for various kinds of execution models, won't work, I think. For example, @thejpster, while … |
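For readers unfamiliar with that style, here is a heavily simplified, hypothetical sketch of an explicitly written driver state machine; the names are made up, and the Tock drivers linked above are the real examples:

```rust
// Which step of the init/operation sequence the driver is in.
#[derive(Clone, Copy)]
enum State {
    Idle,
    ReadingWhoAmI,
    Configuring,
    Ready,
}

struct Accel {
    state: State,
}

impl Accel {
    /// Called from the bus "transfer complete" callback; each completed
    /// transfer advances the machine exactly one step.
    fn transfer_done(&mut self) {
        self.state = match self.state {
            State::Idle => State::Idle,
            State::ReadingWhoAmI => State::Configuring,
            State::Configuring => State::Ready,
            State::Ready => State::Ready,
        };
        // A real driver would also start the next bus transfer here and
        // signal its client once it reaches `Ready`.
    }
}
```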
Without wishing to derail the thread, I can't immediately see how it would be safe to share immutable refs to hardware backed resources. I'll try and dig through the Tock source and see if I can get my head around it. |
@thejpster short answer is Cell (actually VolatileCell generally, but same idea). Happy to elaborate on IRC, email, a separate thread, or whatever. |
More generally, the concept is called interior mutability (https://doc.rust-lang.org/beta/book/mutability.html#interior-vs-exterior-mutability). |
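A minimal illustration of the idea using plain `Cell` (in Tock, `VolatileCell` applies the same trick to memory-mapped registers); the `Uart` type here is made up:

```rust
use core::cell::Cell;

// State the driver must update even though callers only hold `&Uart`.
struct Uart {
    bytes_sent: Cell<u32>,
}

impl Uart {
    // Note: &self, not &mut self; mutation goes through the Cell.
    fn putc(&self, _ch: u8) {
        self.bytes_sent.set(self.bytes_sent.get() + 1);
    }
}

fn main() {
    let uart = Uart { bytes_sent: Cell::new(0) };
    let a = &uart;
    let b = &uart; // two shared references to the same peripheral are fine
    a.putc(b'H');
    b.putc(b'i');
    assert_eq!(uart.bytes_sent.get(), 2);
}
```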
For those who haven't seen it yet: I have written a blog post about the Real Time For the Masses (RTFM) framework, which brings a different approach to the table: event-driven tasks rather than poll-based futures. So instead of writing a huge event loop in … End of pitch.

@enricostano You may want to check that blog post, but you should first start with this one.

As for the I/O API requirements of the RTFM framework: it has none. You can use a blocking API and it would still work, but it would reduce the system's responsiveness. So far I have been using a really simple nonblocking API (see here) built on top of svd2rust-generated device code that doesn't depend on or know anything about the RTFM framework. I think that nonblocking API could easily be extended / adapted to work with futures if we follow the …

As for the embedded traits, I think those should arise from writing applications using different concurrency models (futures, tasks, threads, etc.) and then figuring out what common parts we can abstract away, rather than trying to come up with them upfront and hoping that they will fit with the existing frameworks.
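As a rough, hypothetical illustration of such a framework-agnostic non-blocking API (this is not the code linked above; `Serial` stands in for a handle built on an svd2rust-generated register block):

```rust
pub struct Serial {
    // In real code this would wrap an svd2rust-generated register block.
}

#[derive(Debug)]
pub enum Error {
    Overrun,
}

impl Serial {
    /// Return a received byte if one is ready, `Ok(None)` otherwise.
    /// The caller (an RTFM task, a future, or a plain loop) decides
    /// what to do while it waits.
    pub fn read(&mut self) -> Result<Option<u8>, Error> {
        // A real implementation would check the status register and read
        // the data register; this stub just reports "nothing yet".
        Ok(None)
    }
}
```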
It's certainly doable; you just have to implement the trait for a shared reference. For example:

```rust
impl<'a> BlockingTx for &'a MySerialThing {
    fn putc(&mut self, ch: u8) -> Result<()> { .. }
}

// then
let mut serial: &MySerialThing = ..;
serial.putc(b'H');
```

But I see that @thejpster has added immutable versions of the traits. |
@japaric @thejpster ack regarding immutable references. It still seems to assume quite a different model for dealing with busy HW, but I'll look more deeply; that might be either possible to work around or extensible in some way. I also realize I've been making allusions to our execution model, but there isn't any good description of its constraints (and the justifications for them) anywhere except our in-submission papers. So I'll write something up (blog-post style) to share with you all, to try to be more transparent about where we're coming from. |
Hello embedded developers. As you know, a part of the futures crate is … Would you consider using the …? |
I think with projects like RTFM and Embrio-rs, as well as waiting for upstream Rust to finalize Futures, async/await, Pin, and others, there is not too much more to capture here. I would nominate closing this issue, unless we think it makes sense to focus on some particular goal. Marking for cleanup. |
I am closing this issue, please feel free to open another issue if you would like this discussed further. |
Sure. I'll just link to a few examples of no_std future executors in case someone is interested: https://github.com/Nemo157/embrio-rs/
I'd like to discuss the merits of async IO style concurrency versus preemptive tasks in embedded applications. Rust may make it easier to write async IO style concurrent embedded applications. See futuro for an example of using a `Future` trait in an embedded context. But what's the benefit of this style of concurrency vs preemptive tasks (e.g. FreeRTOS)?

My understanding is that on an OS, async IO is preferred for applications with large numbers of blocking tasks because userspace context switching is cheaper than OS context switching. But in an embedded application, there is no OS, so context switching should be just as expensive whether you're using preemptive tasks or callback-based co-routines (async IO).

So is there a benefit to writing an embedded application using async IO (presumably using a `Future`-like trait) over preemptive tasks? I've never actually written a concurrent application using async IO, so maybe I'm missing something obvious. Is it significantly more ergonomic? Does it save on stack space? What sort of applications would benefit from being written using async IO?