Providing an ABT_thread_sleep #29
Comments
Sounds interesting, so I will leave the issue open.
Yes, it's always possible to do an active loop, but that consumes CPU. Is there any possibility to do it without an active loop?
If you
I currently encounter this issue as well, and I also ended up using . I was wondering, though, when the yield loop could become an issue. Say an ABT_pool uses only one execution stream and a large number of ULTs in that pool are running the yield loop, while a small number of ULTs are not. Wouldn't this mean that the scheduler is more likely to pick another ULT that is just going to yield again, thereby delaying the ULTs that are not in the yield loop? I guess my question is whether the scheduler picks ULTs randomly on yield.
The scheduler picks the ULT returned by the ABT_pool_pop() call on the scheduler's pool, from inside the scheduler. Taking a FIFO pool implementation, when the ULT in the active loop yields, it is pushed back at the end of the FIFO queue and is only scheduled again once the other ULTs in the pool have been given a chance to execute. I think you're right that if many ULTs are in an active yield loop to simulate a sleep, the scheduler may go through many of them before finding a ULT that can actually do useful work.
It would be great to have an ABT_thread_sleep(double timeout_ms) function that puts the calling ABT_thread to sleep for a given amount of time (or "at least the given amount of time", since other ULTs may be running when the timeout has passed).