
SpinLock.POC #99

Closed · wants to merge 5 commits

Conversation

@beef9999

No description provided.

@CLAassistant

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.


chenbo does not seem to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account.
You have signed the CLA already but the status is still pending? Let us recheck it.

@qicosmos
Collaborator

You'd better add a new example for testing the spin lock instead of modifying the current case.
And you need to sign our Contributor License Agreement before we can accept your contribution.

@beef9999
Author

... It's not a contribution. It's a demo to show the bug.

@RainMark
Collaborator

RainMark commented May 13, 2022

... It's not a contribution. It's a demo to show the bug.

OK, we will try to test it. Can you supply the backtrace of the segmentation fault?

@ChuanqiXu9
Collaborator

This might be a misuse. We need to pass an executor at the start point of the async chain. But the diagnostic message from the library side is indeed not friendly. We will try to enhance it. Thanks for reporting.

char write_buf[max_length] = {"hello async_simple"};
char read_buf[max_length];
auto send = [&]() -> async_simple::coro::Lazy<> {
    auto scopeLock = co_await g_wlock.coScopedLock();
@RainMark
Collaborator

RainMark commented May 13, 2022

coScopedLock needs an executor to reschedule the current coroutine so that the others are not blocked, but this demo code uses syncAwait to start a lazy task (without an executor), so the segmentation fault occurred.

You can use ScopedSpinLock to avoid the coroutine yield, or pass an executor to syncAwait.

e.g.:

async_simple::coro::syncAwait(start(io_context, "127.0.0.1", "9980").via(executor));
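
For illustration, here is a minimal self-contained sketch of that pattern (assumptions on my part: the SpinLock/coScopedLock API shown in the demo, a SimpleExecutor whose constructor takes a worker-thread count, and a hypothetical guardedWork() coroutine standing in for the demo's send lambda):

```cpp
// Sketch only, not code from this PR. Assumes async_simple's coroutine
// SpinLock (coScopedLock) and the test-only SimpleExecutor.
#include "async_simple/coro/Lazy.h"
#include "async_simple/coro/SpinLock.h"
#include "async_simple/coro/SyncAwait.h"
#include "async_simple/executors/SimpleExecutor.h"

async_simple::coro::SpinLock g_wlock;

async_simple::coro::Lazy<> guardedWork() {
    // coScopedLock may suspend this coroutine, so the chain it runs in must
    // carry an executor that can resume it later.
    auto scopeLock = co_await g_wlock.coScopedLock();
    // ... critical section ...
    co_return;
}

int main() {
    async_simple::executors::SimpleExecutor executor(2);  // 2 worker threads
    // Binding the executor with .via() before syncAwait is what the original
    // demo was missing; without it the suspended coroutine cannot be resumed.
    async_simple::coro::syncAwait(guardedWork().via(&executor));
}
```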

@beef9999
Author

I think what I need is a mutex (based on coroutines). I didn't find it in the code.

@beef9999
Author

And what kind of executor should I use there?

@RainMark
Collaborator

I think what I need is a mutex (based on coroutines). I didn't find it in the code.

OK, a mutex is being worked on.

@RainMark
Collaborator

And what kind of executor should I use there?

Just use async_simple::executors::SimpleExecutor for testing.

@beef9999
Author

Is coScopedLock + executor sufficient to run my test? Because you said mutex is still under development … I’m a little confused

@RainMark
Collaborator

Is coScopedLock + executor sufficient to run my test? Because you said mutex is still under development … I’m a little confused

yes

@beef9999
Author

beef9999 commented May 15, 2022

I have updated the code and added the executor; now this test case is able to run.

But as I increase the concurrency (the number of coroutines) on the client side, the QPS keeps falling.

| Concurrency | QPS  |
|-------------|------|
| 1           | 167K |
| 2           | 155K |
| 8           | 97K  |
| 32          | 78K  |

@beef9999
Author

Looks like coScopedLock + executor is not behaving the way a mutex would be expected to. Something is blocked.

}
input.push_back(show_qps());
co_await async_simple::coro::collectAll(std::move(input));
Collaborator

collectAll runs multiple coroutines asynchronously on the same thread. It looks like you want all coroutines to run in parallel across all threads; try to use collectAllPara.
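
If it helps, a hedged sketch of that change (the `input` vector of Lazy tasks is assumed to be built as in the demo above; `runAll` is a hypothetical wrapper name):

```cpp
// Sketch only. `input` mirrors the demo's vector of client coroutines.
#include "async_simple/coro/Collect.h"
#include "async_simple/coro/Lazy.h"
#include <vector>

async_simple::coro::Lazy<> runAll(std::vector<async_simple::coro::Lazy<>> input) {
    // collectAllPara schedules the sub-tasks through the bound executor so
    // they can run on different threads, whereas collectAll (as noted above)
    // keeps them on the same thread.
    co_await async_simple::coro::collectAllPara(std::move(input));
}
```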

@ChuanqiXu9
Collaborator

Thanks for reporting. I've got similar results, and my conclusions are:

  1. SpinLock is not performance friendly. When I remove SpinLock and use an atomic counter, the QPS doesn't decrease as we increase the concurrency (see the sketch below).
  2. SimpleExecutor is not good for performance either. We state in the docs that it is only meant for testing. The direct impact here: when we unlock the lock, we send the task that resumes a waiting coroutine to the executor, but the executor doesn't resume it immediately; its scheduling strategy is simply random.

A condition variable here might be better.
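
For reference, a rough sketch of the atomic-counter variant mentioned in point 1 (my reconstruction, not the exact code used for the measurement):

```cpp
// Rough reconstruction: count completed requests with a lock-free atomic
// instead of guarding a plain counter with SpinLock.
#include <atomic>
#include <cstdint>

std::atomic<std::uint64_t> g_qps_counter{0};

// Called from any coroutine/thread after a request completes; no lock is
// taken, so contended coroutines never need to be rescheduled.
inline void recordRequest() {
    g_qps_counter.fetch_add(1, std::memory_order_relaxed);
}

// Called periodically by the QPS reporter; exchange(0) reads and resets
// the counter in a single atomic step.
inline std::uint64_t drainQps() {
    return g_qps_counter.exchange(0, std::memory_order_relaxed);
}
```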

@ChuanqiXu9
Collaborator

@beef9999 Given there are no other comments, I am going to close this one. Thanks for contributing.

@ChuanqiXu9 ChuanqiXu9 closed this May 18, 2022