Issue 51: Unwind error if `Coroutine::yield_with` is inlined #52
base: master
Conversation
src/coroutine.rs (outdated):

```diff
-ctx.resume_ontop(self.0 as *mut _ as usize, coroutine_unwind);
+self.unwind_flag.store(true, ::std::sync::atomic::Ordering::Release);
+//ctx.resume_ontop(self.0 as *mut _ as usize, coroutine_unwind);
+ctx.resume(self.0 as *mut _ as usize);
```
This is the point where previously we passed the `self` pointer as the `data` field to the `coroutine_unwind` method. Now you could, for instance, resume the execution of the coroutine with a custom value like `usize::max_value()` right here. The Coroutine will resume where it left off, and that's the `yield_with()` method.
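A minimal sketch of this sentinel-value idea, outside of coio's real API: the value handed to `resume` comes back as the return value inside `yield_with`, so a reserved value like `usize::MAX` (i.e. `usize::max_value()`) can mean "unwind now". The names `FORCE_UNWIND`, `ForceUnwind`, and `yield_with_check` are hypothetical, and a panic with a marker payload stands in for the actual stack unwinding:

```rust
use std::panic::{self, AssertUnwindSafe};

// Hypothetical sentinel (not coio's actual API): resuming the
// coroutine with this data asks it to unwind instead of continuing.
const FORCE_UNWIND: usize = usize::MAX;

// Panic payload marking a deliberate unwind, so it can be told apart
// from a real panic inside the coroutine body.
struct ForceUnwind;

// Sketch of the check yield_with could perform on the value
// returned from resume().
fn yield_with_check(data: usize) -> usize {
    if data == FORCE_UNWIND {
        // Unwind the coroutine stack; Drop impls run on the way out.
        panic::panic_any(ForceUnwind);
    }
    data // normal resume: hand the data back to the coroutine body
}

fn main() {
    panic::set_hook(Box::new(|_| {})); // silence the panic message

    // Normal resume: data flows through untouched.
    assert_eq!(yield_with_check(42), 42);

    // Forced unwind: the panic carries the marker payload.
    let err = panic::catch_unwind(AssertUnwindSafe(|| yield_with_check(FORCE_UNWIND)))
        .unwrap_err();
    assert!(err.downcast_ref::<ForceUnwind>().is_some());
}
```

The marker payload matters: a bare `panic!` inside `yield_with` would be indistinguishable from a genuine panic in user code, while a dedicated payload type can be caught and swallowed at the coroutine boundary.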
Yes, you can pass a specific value to the `data` field to indicate that you are going to do force unwinding. The `data` will be returned from the `resume` in `yield_with`.

Also, you can use the `state` field: for instance, you can add a `State::ForceUnwind`, set it to `state` before this `ctx.resume` right here, and check `self.state` in `yield_with`.

BTW: there is no need to use an atomic flag here, because it is impossible to access one `Coroutine` from multiple threads, which is ensured by the other parts of coio.
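The `state`-field variant of the suggestion above can be sketched like this. The `State` variants and field layout here are illustrative only (coio's real `State` enum has different variants); as before, a marker-payload panic stands in for the real unwind:

```rust
use std::panic::{self, AssertUnwindSafe};

// Hypothetical coroutine states; only ForceUnwind matters for this
// sketch (coio's actual State enum looks different).
#[derive(PartialEq, Clone, Copy)]
enum State {
    Running,
    ForceUnwind,
}

struct ForceUnwind; // panic payload marking the deliberate unwind

struct Coroutine {
    state: State,
}

impl Coroutine {
    // Sketch: after being resumed, yield_with checks self.state and
    // unwinds if the resumer requested it.
    fn yield_with(&self) -> usize {
        if self.state == State::ForceUnwind {
            panic::panic_any(ForceUnwind);
        }
        0 // placeholder for the data handed over by resume()
    }
}

fn main() {
    panic::set_hook(Box::new(|_| {})); // keep the output quiet

    let mut co = Coroutine { state: State::Running };
    assert_eq!(co.yield_with(), 0); // normal resume

    // The resumer sets ForceUnwind before ctx.resume()...
    co.state = State::ForceUnwind;
    // ...and the next yield_with turns into an unwind.
    let err = panic::catch_unwind(AssertUnwindSafe(|| co.yield_with())).unwrap_err();
    assert!(err.downcast_ref::<ForceUnwind>().is_some());
}
```

Note the plain (non-atomic) field: as pointed out above, a `Coroutine` is never touched from two threads at once, so ordinary state suffices.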
Theoretically, we don't need to check it here, because it is impossible to trigger any unwinding process before … at `coroutine::Handle::drop`.
I tried simply checking for …. It breaks somewhere, and I'd like to have the logger working to figure out where.
@glasswings it does not work because you'd need to check for ….
I see. I misunderstood the topology. It's not one Processor context which activates each Coroutine in turn; there's a ring of running Coroutines. The … It (may?) also mean the ….

The function of … I think you're saying, and I'll next try: ….

Maybe we can save the middle two context switches.
@glasswings How's it going? Should one of us take a look? We could continue development of this as well if something is left. And if you want to: my offer of introducing you to how all of this works is still there. 😄
I'm feeling pretty happy with the code as it is. It would be nice to pass the network echo tests, too, but I don't think I broke them. The added comments should definitely be reviewed to be sure they are not misleading. They are a little too verbose and I wouldn't be offended if they're edited down. All in all, it's ready for a look.

I pushed the faster coroutine teardown (two fewer context switches) to my fork. It seems to work just as well at first glance, but I'm not likely to mess around with it much further.

My roommate has expressed renewed interest in learning programming, and I'll be shifting my time to things we can work on together. So I'm planning to see this issue through to resolution and, if it's not too frustrating, troubleshoot why the network echo tests aren't working on MSYS/Win10. Then I'll be moving on to other things.

Thank you both very much for making me feel welcome.
`coroutine_unwind` was doing undefined things. I've replaced it with an AtomicBool flag triggering the same unwind code, moved to `yield_with`. This isn't optimal yet (I'll try to get it passing through the `data` field as zonyitoo suggests), but it does pass all tests even with `#[inline(always)]`.

Still needs:

- … `data` field
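The AtomicBool approach described in this PR can be sketched as follows. This is a simplified model, not coio's actual `Coroutine` type: the resumer sets `unwind_flag` with `Release` ordering before switching into the coroutine (as in the diff above), and `yield_with` loads it with `Acquire` and unwinds via a marker-payload panic. The type and method names mirror the discussion but the bodies are hypothetical:

```rust
use std::panic::{self, AssertUnwindSafe};
use std::sync::atomic::{AtomicBool, Ordering};

struct ForceUnwind; // panic payload marking the deliberate unwind

struct Coroutine {
    unwind_flag: AtomicBool,
}

impl Coroutine {
    fn new() -> Self {
        Coroutine { unwind_flag: AtomicBool::new(false) }
    }

    // Sketch of the yield path: if the resumer set the flag before
    // switching back in, start unwinding instead of returning normally.
    fn yield_with(&self) -> usize {
        if self.unwind_flag.load(Ordering::Acquire) {
            panic::panic_any(ForceUnwind);
        }
        0 // placeholder for the value handed over by resume()
    }
}

fn main() {
    panic::set_hook(Box::new(|_| {})); // keep the output quiet

    let co = Coroutine::new();
    assert_eq!(co.yield_with(), 0); // flag not set: normal return

    // The resumer sets the flag before switching into the coroutine...
    co.unwind_flag.store(true, Ordering::Release);
    // ...and the next yield_with turns into an unwind.
    let err = panic::catch_unwind(AssertUnwindSafe(|| co.yield_with())).unwrap_err();
    assert!(err.downcast_ref::<ForceUnwind>().is_some());
}
```

Since `yield_with` survives `#[inline(always)]` here (the flag check is ordinary Rust control flow rather than a context-switch trick), this matches the PR's claim that inlining no longer breaks the unwind; the atomic itself is stricter than needed per the review comments, since a `Coroutine` is never shared across threads.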