Generators in RTFM, short progress report. #18

Open

perlindgren opened this issue Sep 26, 2019 · 3 comments

@perlindgren (Contributor)
Jorge and I have been discussing the potential use of generators for quite some time. We finally sat down to bite the bullet, and here are some first experiences.

Some observations:

  • Generators depend on quite a few experimental features when used outside the compiler (if I understand correctly, they are used internally to implement async/await, but the user-facing API is not stabilized).
  • Generators can indeed be used on core-only systems. We can even put generators in static memory, as below.
  • Generators stored in a static can capture only static memory (the parameter x in the example below).
  • Generators cannot currently be resumed with an argument.
#![feature(generator_trait)]
#![feature(generators)]
#![feature(never_type)]
#![feature(type_alias_impl_trait)]

use core::{mem::MaybeUninit, ops::Generator, pin::Pin};

#[rustfmt::skip]
type G = impl Generator<Yield = (), Return = !>;

fn task(x: &'static mut u32) -> G {
    move || loop {
        println!("Hello {}", &x);
        *x += 1;
        yield;
        println!("World {}", &x);
        yield;
    }
}

// The generator itself lives in static memory.
static mut X: MaybeUninit<G> = MaybeUninit::uninit();

fn main() {
    unsafe {
        // A generator stored in a static may only capture static memory.
        static mut x: u32 = 0;
        X.as_mut_ptr().write(task(&mut x));
        let g: &mut dyn Generator<Yield = (), Return = !> =
            &mut *X.as_mut_ptr();
        Pin::new_unchecked(&mut *g).resume();
        Pin::new_unchecked(&mut *g).resume();
    }
}

So let's go, the main idea:
We want to be able to write sequences (linear code) that can yield and resume where they left off. Under the task/resource model of RTFM this is OK as long as no resources are locked at the point of yielding (holding a resource implies that the system ceiling (BASEPRI) is held at the level of that resource, which would essentially block other tasks from executing). Luckily, the closure-based resource access of RTFM prevents code from yielding inside a lock, since `yield` is only legal directly in the generator body, not inside an inner closure.
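The shape of that closure-based lock can be sketched in stable Rust as follows. This is a hypothetical stand-in, not the actual RTFM resource proxy: `Resource`, its `ceiling` field, and the `+ 1` ceiling bump are all illustrative assumptions.

```rust
use std::cell::Cell;

// Hypothetical sketch (not the actual RTFM proxy): the ceiling is raised
// for exactly the duration of the closure, so the critical section cannot
// outlive the `lock` call. Attempting to `yield` inside the closure would
// be a compile error, which is what makes this API yield-safe.
struct Resource<T> {
    data: Cell<T>,
    ceiling: Cell<u8>, // stand-in for the BASEPRI system ceiling
}

impl<T: Copy> Resource<T> {
    fn lock<R>(&self, f: impl FnOnce(&mut T) -> R) -> R {
        let old = self.ceiling.get();
        self.ceiling.set(old + 1); // raise the ceiling
        let mut v = self.data.get();
        let r = f(&mut v);
        self.data.set(v);
        self.ceiling.set(old); // restore the ceiling before returning
        r
    }
}

fn main() {
    let shared = Resource { data: Cell::new(0u32), ceiling: Cell::new(0) };
    shared.lock(|v| *v += 1);
    assert_eq!(shared.data.get(), 1);
    assert_eq!(shared.ceiling.get(), 0); // ceiling restored after the lock
}
```

The key design point is that the critical section is lexically bounded by the closure, so the ceiling is provably restored on every exit path.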

Here is a snippet of hand-written code for a task (after running the RTFM proc-macro).

type Generatorfoo1 = impl Generator<Yield = (), Return = !>;
static mut GENERATOR_FOO1: core::mem::MaybeUninit<Generatorfoo1> =
    core::mem::MaybeUninit::uninit();
#[allow(non_snake_case)]
fn foo1(mut ctx: foo1::Context) -> Generatorfoo1 {
    move || loop {
        ctx.resources.shared2.lock(|v| {
            *v += 1;
        });
        yield;
        ctx.resources.shared2.lock(|v| {
            // ...
        });
        yield;
        // ...
    }
}

The Context/Resource proxy was hand-written for this small example, but it shows that the proof of concept works. I have verified that locking properly prevents other tasks from being executed while the lock is held. The implementation of Priority was changed to an owned Cell instead of a reference to make it work in a static. This seems to impede the compiler's ability to optimize out unnecessary locks (each lock amounts to a comparison and a branch as overhead, plus some unneeded code). I assume the reason is that the Cell is stored in a static, so the compiler cannot assume exclusive access to its inner value.
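The comparison-and-branch that survives optimization can be sketched as below. All names here (`Priority`, the free-standing `lock`, the ceiling constant) are illustrative assumptions, not the actual RTFM internals; the point is only the shape of the ceiling test.

```rust
use std::cell::Cell;

// Sketch of the ceiling test a lock performs. When the current priority is
// held by reference, the compiler can often prove the comparison at compile
// time and elide the branch; with an owned Cell living in a static it
// cannot, so the comparison and branch remain as runtime overhead.
struct Priority {
    current: Cell<u8>, // owned Cell, as in the experiment described above
}

fn lock<T, R>(prio: &Priority, ceiling: u8, data: &mut T, f: impl FnOnce(&mut T) -> R) -> R {
    let old = prio.current.get();
    if old < ceiling {
        // on the real hardware this is where BASEPRI would be written
        prio.current.set(ceiling);
        let r = f(data);
        prio.current.set(old); // restore on exit
        r
    } else {
        // already at or above the ceiling: no masking needed
        f(data)
    }
}

fn main() {
    let prio = Priority { current: Cell::new(0) };
    let mut shared = 0u32;
    lock(&prio, 2, &mut shared, |v| *v += 1);
    assert_eq!(shared, 1);
    assert_eq!(prio.current.get(), 0); // priority restored
}
```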

So in conclusion, we can get it working one way or another; under RFC #17 the implementation may be simplified. The problem with lock optimization needs some love. If optimization is not possible, the overhead is acceptable as long as it applies only to locks taken inside generators. (In a prior version of RTFM, the current ceiling value was passed around to achieve this lock optimization; that is always an option, but a bit verbose...)

Some caveats:

  • We cannot use the message-passing API due to the lack of resume parameters. The user would need to set up an SPSC queue for passing data (similar to a legacy thread-based RTOS).
  • This is just a proof of concept as of yet; the implementation is pending.
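The SPSC workaround from the first caveat could look roughly like this. Below is a minimal hand-rolled ring buffer as a stand-in; in practice one would likely reach for a dedicated queue type (e.g. from the heapless crate) rather than this sketch.

```rust
use std::cell::Cell;

const N: usize = 4; // capacity N-1: one slot is wasted to tell full from empty

// Minimal single-producer single-consumer ring buffer sketch, standing in
// for the queue a user would set up in place of resume parameters.
struct Spsc {
    buf: [Cell<u32>; N],
    head: Cell<usize>, // next slot to write
    tail: Cell<usize>, // next slot to read
}

impl Spsc {
    fn new() -> Self {
        Spsc {
            buf: std::array::from_fn(|_| Cell::new(0)),
            head: Cell::new(0),
            tail: Cell::new(0),
        }
    }

    fn enqueue(&self, v: u32) -> Result<(), u32> {
        let head = self.head.get();
        let next = (head + 1) % N;
        if next == self.tail.get() {
            return Err(v); // queue full: hand the value back
        }
        self.buf[head].set(v);
        self.head.set(next);
        Ok(())
    }

    fn dequeue(&self) -> Option<u32> {
        let tail = self.tail.get();
        if tail == self.head.get() {
            return None; // queue empty
        }
        let v = self.buf[tail].get();
        self.tail.set((tail + 1) % N);
        Some(v)
    }
}

fn main() {
    let q = Spsc::new();
    q.enqueue(1).unwrap();
    q.enqueue(2).unwrap();
    assert_eq!(q.dequeue(), Some(1));
    assert_eq!(q.dequeue(), Some(2));
    assert_eq!(q.dequeue(), None);
}
```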

Why?:
So why the interest in generators? Well, they offer sequential-style programming, essentially a state machine, suitable for implementing transactions. Secondly, this may open up async/await under RTFM (we have the dispatchers/tasks, we have generators, what's left ...)

Thanks to Jorge for numerous discussions and code sketches.

Please use this issue for discussions, ideas, and progress reports on generators under RTFM.

@perlindgren (Contributor, Author)

By the way, the experiment reveals a nasty little bug in rustfmt, requiring #[rustfmt::skip]. It also reveals a nasty little bug in cargo expand: the #[rustfmt::skip] seems to be ignored, and the code still gets erroneously formatted. (It could be because the notation is experimental, so it's not a big deal.)

@perlindgren (Contributor, Author)

perlindgren commented Oct 29, 2019

The rustfmt bug seems to be fixed on the latest nightly.

@perlindgren (Contributor, Author)

Jorge has just contributed a prototype implementation (great work). However, locks cannot be optimized out (instead there will always be a BASEPRI read, and a write via BASEPRI_MAX), since we cannot capture the Priority (Cell reference) inside the generator. The question is whether that can be solved (or whether it is even worth the effort). If we get resume arguments, it might be possible to pass in the whole context (at each resume point), but currently Rust does not support this. This might be something for the Embedded WG to push (resume arguments) to the compiler team in 2019. In the meantime, perhaps it is possible to use return arguments for the Priority (Cell) and let it be set by the task caller. I doubt that this would allow the compiler to do much optimization, since it would go through a static. (Not even sure whether this is possible at all...)
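The "write the argument into shared memory before resuming" workaround can be illustrated on stable Rust. Since real generators require nightly, a hand-rolled `FnMut` state machine stands in for the generator below; `arg`, `task`, and the two resume points are all illustrative assumptions, not RTFM code.

```rust
use std::cell::Cell;
use std::rc::Rc;

// Sketch of the resume-argument workaround: in the absence of resume
// arguments, the caller writes the "argument" into shared memory before
// resuming, and the task reads it at its resume point. A hand-rolled
// state machine stands in for the (nightly-only) generator here.
fn main() {
    let arg: Rc<Cell<u32>> = Rc::new(Cell::new(0));

    let task_arg = Rc::clone(&arg);
    let mut state = 0u8; // which resume point we are at
    let mut sum = 0u32;  // state carried across "yields"
    let mut task = move || {
        // each arm corresponds to one resume point of the generator
        match state {
            0 => { sum += task_arg.get(); state = 1; }
            _ => { sum += task_arg.get(); state = 0; }
        }
        sum
    };

    arg.set(1); // caller supplies the "resume argument" out of band
    assert_eq!(task(), 1);
    arg.set(41);
    assert_eq!(task(), 42);
}
```

As noted above, routing the data through shared (and in the real setting, static) memory is exactly what likely defeats the lock optimization, so this sketches functionality, not performance.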
