reschedule not done after mutex unlock #20802
Comments
Indeed, there's no reschedule done inside k_mutex_unlock(). I'm not sure this is a bug, though. Note that k_sem_give() has exactly the same behavior, as do k_poll_signal_raise() and a bunch of other "write-like" IPC APIs we have. But not all: I note that an unblocking insert to a queue (e.g. k_lifo/k_fifo) does do a synchronous reschedule. We've never been good about documenting what our reschedule points are.

As far as k_mutex... I suspect the reason it's done this way is a performance thing. There's an existing controversy in the industry about "fair" locks. IIRC, the last time this came up it was pointed out that default POSIX mutexes on OS X are fair (i.e. they have the behavior you want, and always yield if needed) while Linux/glibc ones are not, which is why they can minimize the number of context switches and thus blow Apple away in routine benchmarks. I officially take no position here.

If there's desire for the feature, this small patch should work to do it. It doesn't break existing tests, but I haven't tried it with your example above yet. (Note that it also fixes what seem like really obvious races here: the existing code changes mutex state after readying the thread, which means that an SMP core or interrupt might run it before the mutex is correct. I'll submit those fixes separately once we decide what to do here.)

```diff
--- a/kernel/mutex.c
+++ b/kernel/mutex.c
@@ -233,18 +233,15 @@ void z_impl_k_mutex_unlock(struct k_mutex *mutex)
 		mutex, new_owner, new_owner ? new_owner->base.prio : -1000);
 
 	if (new_owner != NULL) {
-		z_ready_thread(new_owner);
-
-		k_spin_unlock(&lock, key);
-
-		arch_thread_return_value_set(new_owner, 0);
-
 		/*
 		 * new owner is already of higher or equal prio than first
 		 * waiter since the wait queue is priority-based: no need to
 		 * ajust its priority
 		 */
 		mutex->owner_orig_prio = new_owner->base.prio;
+		arch_thread_return_value_set(new_owner, 0);
+		z_ready_thread(new_owner);
+		z_reschedule(&lock, key);
 	} else {
 		mutex->lock_count = 0U;
 		k_spin_unlock(&lock, key);
```
my vote is to have a reschedule point here. k_mutex should be considered a special case, like the Linux RT-Mutex, apart from the other IPC primitives, which have more traditional wait/wake semantics and aren't supposed to be priority-inheriting.
Lowering priority. Based on @andyross's evaluation, even considering this a medium bug may be too much.
The k_mutex is a priority-inheriting mutex, so on unlock it's possible that a thread's priority will be lowered. Make this a reschedule point so that reasoning about thread priorities is easier (possibly at the cost of performance): most users are going to expect that the priority elevation stops at exactly the moment of unlock.

Note that this also reorders the code to fix what appear to be obvious race conditions: after the call to z_ready_thread(), that thread may be run (e.g. by an interrupt preemption or on another SMP core), yet the return value and mutex state weren't correctly set yet. The spinlock was also released prematurely.

Fixes zephyrproject-rtos#20802

Signed-off-by: Andy Ross <[email protected]>
While this is currently a question, it is blocking #18970, as described in #20919 (review): until it's answered I can't identify the conditions under which a cooperative thread may transition from running to another state.
I think we're converging to a consensus that this is working as designed for co-op threads (even if we add a z_reschedule(), it's a no-op if the caller is coop) but definitely broken for preempt threads. |
Describe the bug
I was preparing some training material and noticed behavior that I believe is a bug.
I have threads; two of them are cooperative. The higher-priority one is waiting for a mutex, and the lower-priority one is holding it. After the mutex is released, the low-priority thread continues execution until it sleeps. If the mutex is instead held by a preemptive thread, releasing it causes an immediate swap to the high-priority thread.
To Reproduce
Steps to reproduce the behavior:
Run simple code
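The reporter's code isn't included, but the scenario above can be sketched roughly as follows. This is a hypothetical reproduction, not the original code: the thread names, stack sizes, priorities, and sleep durations are all assumptions for illustration (cooperative priorities are negative in Zephyr, so -2 is higher priority than -1). It requires a Zephyr build environment and won't compile standalone.

```c
/* Hypothetical repro sketch: two cooperative threads sharing a k_mutex. */
#include <zephyr.h>

K_MUTEX_DEFINE(my_mutex);

static void high_prio_fn(void *a, void *b, void *c)
{
	k_sleep(K_MSEC(10));  /* let the low-prio thread grab the mutex first */
	k_mutex_lock(&my_mutex, K_FOREVER);
	printk("high: got mutex\n");
	k_mutex_unlock(&my_mutex);
}

static void low_prio_fn(void *a, void *b, void *c)
{
	k_mutex_lock(&my_mutex, K_FOREVER);
	k_sleep(K_MSEC(100)); /* high-prio thread now blocks on the mutex */
	k_mutex_unlock(&my_mutex);
	/* Observed behavior: with cooperative threads, execution continues
	 * here after unlock instead of swapping to the high-prio waiter. */
	printk("low: still running after unlock\n");
	k_sleep(K_MSEC(10));  /* only now does the high-prio thread run */
}

/* K_THREAD_DEFINE(name, stack, entry, p1, p2, p3, prio, options, delay) */
K_THREAD_DEFINE(high_id, 1024, high_prio_fn, NULL, NULL, NULL, -2, 0, 0);
K_THREAD_DEFINE(low_id,  1024, low_prio_fn,  NULL, NULL, NULL, -1, 0, 0);
```

With the priorities changed to preemptive values (e.g. 1 and 2), the unlock would instead be expected to swap immediately to the waiter.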
Expected behavior
The high-priority thread should be swapped in as soon as the lock is released, i.e. mutex_unlock must be a scheduling point!
Impact
Probably showstopper
Screenshots or console output
N/A
Environment (please complete the following information):
master
Additional context
N/A