
Seems starve writer @https://github.com/boostorg/thread/blob/boost-1.68.0/include/boost/thread/pthread/shared_mutex.hpp#L272 #265

Closed
DengZuoheng opened this issue Jan 8, 2019 · 6 comments

Comments

@DengZuoheng

        void lock()
        {
#if defined BOOST_THREAD_PROVIDES_INTERRUPTIONS
            boost::this_thread::disable_interruption do_not_disturb;
#endif
            boost::unique_lock<boost::mutex> lk(state_change);
            state.exclusive_waiting_blocked=true;
            exclusive_cond.wait(lk, boost::bind(&state_data::can_lock, boost::ref(state)));
            state.exclusive=true;
        }

When the last reader calls unlock_shared(), state.exclusive_waiting_blocked is set to false. If a writer and a reader are both waiting at that moment and the reader wakes up first and takes a read lock, the writer has to wait again, but state.exclusive_waiting_blocked is never set back to true on behalf of the waiting writer.

Could this starve the writer?
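For illustration, here is a single-threaded sketch of that interleaving. The state_data below is a simplification with predicate names modeled on pthread/shared_mutex.hpp; it is not the actual Boost source.

```cpp
#include <cassert>

// Simplified stand-in for the state_data flags involved (a sketch,
// assuming predicate semantics like those in pthread/shared_mutex.hpp).
struct state_data {
    unsigned shared_count = 0;               // readers holding the lock
    bool exclusive = false;                  // a writer holds the lock
    bool exclusive_waiting_blocked = false;  // a writer is queued

    // lock() waits until no reader or writer holds the mutex.
    bool can_lock() const { return shared_count == 0 && !exclusive; }
    // lock_shared() is gated on this: readers defer to a queued writer.
    bool can_lock_shared() const { return !exclusive && !exclusive_waiting_blocked; }
};

// Returns true if the problematic window occurs: a new reader is
// admitted while the woken writer is left with its flag cleared.
bool writer_flag_lost() {
    state_data s;
    s.shared_count = 1;                  // R1 holds a shared lock
    s.exclusive_waiting_blocked = true;  // W announces itself and waits
    // R1's unlock_shared(): the last reader clears the writer flag.
    s.shared_count = 0;
    s.exclusive_waiting_blocked = false;
    // Window: R2's lock_shared() sees no blocked writer and slips in.
    bool r2_admitted = s.can_lock_shared();
    s.shared_count = 1;
    // W wakes, its predicate fails, and it waits again with the flag
    // still false, so subsequent readers are never held back.
    return r2_admitted && !s.can_lock() && !s.exclusive_waiting_blocked;
}
```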

@DengZuoheng DengZuoheng changed the title Seems starve wirter @https://github.com/boostorg/thread/blob/boost-1.68.0/include/boost/thread/pthread/shared_mutex.hpp#L272 Seems starve writer @https://github.com/boostorg/thread/blob/boost-1.68.0/include/boost/thread/pthread/shared_mutex.hpp#L272 Jan 8, 2019
@viboes
Collaborator

viboes commented Mar 17, 2019

I'll need some time to refresh my memory.

@austin-beer
Contributor

If I understand correctly, here is the scenario that @DengZuoheng has described.

R1 and R2 are both reader threads. W is a writer thread.

  • R1 is holding a shared lock (it is the only thread holding a shared lock so shared_count is 1).
  • W wants to get an exclusive lock. It enters the lock() function, sets exclusive_waiting_blocked to true, and then waits on exclusive_cond.
  • R1 calls unlock_shared(). That sets shared_count to 0 and exclusive_waiting_blocked to false, and then calls notify_one() on exclusive_cond.
  • R2 wakes up and calls lock_shared(). Since exclusive_waiting_blocked is false, the call succeeds without blocking and R2 now holds a shared lock.
  • W wakes up due to the notify_one() call but sees that shared_count is still 1 and so keeps waiting, but without setting exclusive_waiting_blocked back to true. Writer starvation occurs.

I think this situation is possible. However, I think it is extremely unlikely, because it requires R2 to call lock_shared() and be scheduled by the OS in the narrow window between R1 calling unlock_shared() and W waking up.
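One possible mitigation, sketched here for illustration only (this is not the actual Boost patch), is to re-assert exclusive_waiting_blocked on every pass through the writer's wait loop, so a reader that slips in cannot leave the flag cleared. Member names mirror the Boost internals; std types stand in for the boost:: ones so the sketch is self-contained.

```cpp
#include <condition_variable>
#include <mutex>

// Sketch of a hypothetical fix: the writer re-sets its flag each time
// it re-evaluates the predicate, instead of relying on the value set
// once before the first wait.
struct shared_mutex_sketch {
    std::mutex state_change;
    std::condition_variable exclusive_cond;
    unsigned shared_count = 0;
    bool exclusive = false;
    bool exclusive_waiting_blocked = false;

    void lock() {
        std::unique_lock<std::mutex> lk(state_change);
        while (shared_count != 0 || exclusive) {
            exclusive_waiting_blocked = true;  // re-set on every iteration
            exclusive_cond.wait(lk);
        }
        exclusive = true;
        exclusive_waiting_blocked = false;
    }

    void unlock() {
        { std::lock_guard<std::mutex> lk(state_change); exclusive = false; }
        exclusive_cond.notify_one();
    }
};
```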

Boost.Thread contains another implementation of shared_mutex.hpp that was originally created by Howard Hinnant to replace the existing pthread/shared_mutex.hpp. This implementation can be used by building Boost and your application with BOOST_THREAD_V2_SHARED_MUTEX defined. I believe this implementation is simpler and more robust than the existing one. However, without a set of tests to prove that, it's probably too risky to replace an implementation that has been in use for years.

This V2 implementation would behave as follows in the above scenario.

  • R1 is holding a shared lock (it is the only thread holding a shared lock so num_readers is 1).
  • W wants to get an exclusive lock. It enters the lock() function, sets write_entered_ to true, and then waits on gate2_.
  • R1 calls unlock_shared(). That sets num_readers to 0 and then calls notify_one() on gate2_.
  • R2 wakes up and calls lock_shared(). Since write_entered_ is still true, it blocks and starts waiting on gate1_.
  • W wakes up due to the notify_one() call and sees that num_readers is 0 and so it successfully takes the lock. No starvation occurs.
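The two-gate design behind that behavior can be sketched as follows. This is a minimal illustration of the Hinnant-style scheme, not the actual V2 source (the real implementation packs num_readers and write_entered_ into a single word and handles more cases).

```cpp
#include <condition_variable>
#include <mutex>

// Minimal sketch of a Hinnant-style two-gate shared mutex. Once a
// writer passes gate1_, write_entered_ closes that gate to new
// readers, so the writer cannot be starved by late arrivals.
class shared_mutex_v2_sketch {
    std::mutex mut_;
    std::condition_variable gate1_;  // entry gate: readers and writers
    std::condition_variable gate2_;  // drain gate: the admitted writer
    unsigned num_readers_ = 0;
    bool write_entered_ = false;

public:
    void lock() {  // writer
        std::unique_lock<std::mutex> lk(mut_);
        gate1_.wait(lk, [this] { return !write_entered_; });
        write_entered_ = true;  // closes gate1_ to new readers
        gate2_.wait(lk, [this] { return num_readers_ == 0; });
    }
    void unlock() {
        { std::lock_guard<std::mutex> lk(mut_); write_entered_ = false; }
        gate1_.notify_all();
    }
    void lock_shared() {  // reader
        std::unique_lock<std::mutex> lk(mut_);
        gate1_.wait(lk, [this] { return !write_entered_; });
        ++num_readers_;
    }
    void unlock_shared() {
        std::unique_lock<std::mutex> lk(mut_);
        if (--num_readers_ == 0 && write_entered_) {
            lk.unlock();
            gate2_.notify_one();  // last reader wakes the admitted writer
        }
    }
};
```

In the scenario above, R2 blocks at gate1_ because write_entered_ is still true, so W is guaranteed to drain the readers and take the lock.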

@viboes
Collaborator

viboes commented Mar 24, 2019

I'm really sorry. I believed that BOOST_THREAD_V2_SHARED_MUTEX was defined when you define BOOST_THREAD_VERSION>=3, and it seems that is not the case.

Please go ahead and try the V2 version by defining BOOST_THREAD_V2_SHARED_MUTEX, and let us know whether you still suffer from the same symptoms.
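A typical way to set the macro consistently for both Boost and the application might look like the following (the build commands and library names are assumptions; adjust them to your own build setup):

```shell
# Rebuild Boost.Thread with the macro defined (b2 'define=' property).
./b2 define=BOOST_THREAD_V2_SHARED_MUTEX --with-thread stage

# Compile the application with the same macro so both sides agree.
g++ -DBOOST_THREAD_V2_SHARED_MUTEX app.cpp -lboost_thread -lboost_system -pthread
```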

Thanks Austin for your help.

@DengZuoheng
Author

Thanks @austin-beer and @viboes for the help. I will try shared_mutex V2 and will open a new issue if I still suffer the same symptoms.

@AbdoSalem

AbdoSalem commented Oct 5, 2021

Just to confirm: the situation described in #265 (comment) with the non-V2 shared mutex is reproducible. We have a situation where a unique lock starves roughly 1 out of 5 times. The setup that increases the likelihood of this issue is as follows:

  • ~100 threads request shared locks all the time; due to internal logic they can wait for one another
  • another thread requests the unique lock, so it waits until the shared locks are done
  • the ~100 threads keep requesting the shared locks and get them before the unique lock does, hence the unique lock starves.

This is in reply to @austin-beer's comment:

> I think this situation is possible. However, I think it is extremely unlikely due to the fact that it requires R2 to call lock_shared() and be scheduled by the OS right between the time R1 calls unlock_shared() and the time W wakes up.

The above scenario was reproduced using 1.70. We did not experience this issue previously, but then we were on 1.58. We see that the above behavior changed in 1.67, and unfortunately there was no mention of it in the release notes.

@JSoet

JSoet commented Jun 20, 2022

I'm not sure, but I think the example mentioned in #361 is the same writer-starvation issue. He included sample code there to reproduce the problem, as well as a link to a commit that he said seems to have introduced the issue in Boost 1.68.
