
【chunkserver & ut】split read-write thread and add localut.sh #75

Merged: 2 commits merged into opencurve:master on Sep 23, 2020

Conversation

@ilixiaocui (Contributor) commented Aug 24, 2020

What problem does this PR solve?

  1. Add a new localut.sh to run unit tests concurrently in a local workspace.
  2. Add read threads to ConcurrentApplyModule to split read and write handling (see the sketch after this list).
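
For concreteness, a minimal sketch of what the read-write split means. It is pieced together from the snippets quoted later in this review (wapplyMap_, Hash and the Push parameters appear there); rapplyMap_, the tq member, CHUNK_OP_READ and the simplified std::function signature are assumptions, not the merged code:

// hypothetical dispatch: read ops go to the read pool, everything else to
// the write pool, so long-running reads cannot block writes behind them
bool ConcurrentApplyModule::Push(uint64_t key, CHUNK_OP_TYPE optype,
                                 std::function<void()> task) {
    if (optype == CHUNK_OP_TYPE::CHUNK_OP_READ) {
        // hash onto one of the read threads' queues
        return rapplyMap_[Hash(key, rconcurrentsize_)]->tq.Push(task);
    }
    // hash onto one of the write threads' queues
    return wapplyMap_[Hash(key, wconcurrentsize_)]->tq.Push(task);
}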

Issue Number: no related issue

Problem Summary:

What is changed and how it works?

What's Changed:

How it Works:

Side effects (Breaking backward compatibility? Performance regression?):

Check List

  • Relevant documentation/comments are changed or added
  • I acknowledge that all my contributions will be made under the project's license

@ilixiaocui ilixiaocui changed the title 【chunkserver & ut】 【chunkserver & ut】split read-write thread and add localut.sh Aug 24, 2020
@ilixiaocui ilixiaocui force-pushed the master branch 10 times, most recently from 19f7c1d to f874e36 on August 26, 2020 07:12
coverage/test.py Outdated
@@ -0,0 +1,20 @@
#!/usr/bin/env python
Contributor

Aren't test.py and tmp.py redundant?

Contributor Author

done

* @param[in] args: param to execute the task
*/
template<class F, class... Args>
bool Push(uint64_t key, CHUNK_OP_TYPE optype, F&& f, Args&&... args) {
@wu-hanqing (Contributor) commented Sep 16, 2020

The optype parameter has no comment.

Contributor Author

done
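
A plausible shape for the completed doc comment; only the args line comes from the PR, the other parameter descriptions are assumptions:

/**
 * @param[in] key: hash key used to pick the queue that runs the task
 * @param[in] optype: type of the chunk op; decides whether the task goes
 *                    to the read pool or the write pool
 * @param[in] f: the task to execute
 * @param[in] args: param to execute the task
 */
template<class F, class... Args>
bool Push(uint64_t key, CHUNK_OP_TYPE optype, F&& f, Args&&... args);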

namespace chunkserver {
namespace concurrent {

enum ThreadPoolType {
Contributor

enum class

Contributor Author

done
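
What the suggestion amounts to, as a sketch (WRITE appears in a later snippet; READ is an assumption): a scoped enum keeps the enumerators out of the enclosing namespace and forbids implicit conversion to int.

enum class ThreadPoolType {
    READ,
    WRITE
};

// call sites must now qualify the enumerator:
// InitThreadPool(ThreadPoolType::READ, concurrent, depth);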


void InitThreadPool(ThreadPoolType type, int concurrent, int depth);

inline int Hash(uint64_t key, int concurrent) {
Contributor

The inline here is redundant.

Contributor Author

done
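
Why the keyword is redundant: a member function defined inside the class body is implicitly inline, so spelling it out changes nothing. A minimal sketch, with the modulo body assumed:

class ConcurrentApplyModule {
 public:
    // implicitly inline because it is defined in the class body
    int Hash(uint64_t key, int concurrent) {
        return static_cast<int>(key % concurrent);
    }
};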

* @param[in] rconcurrentsize: number of read threads
* @param[in] rqueuedepth: depth of the read queue in every thread
*/
bool Init(int wconcurrentsize, int wqueuedepth,
Contributor

Consider wrapping these parameters into an option struct.

Contributor Author

done
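
A sketch of the suggested refactor; the struct and field names are assumptions rather than the merged code. Bundling the four values keeps Init's signature stable if more knobs are added later:

struct ConcurrentApplyOption {
    int wconcurrentsize;  // number of write threads
    int wqueuedepth;      // queue depth in every write thread
    int rconcurrentsize;  // number of read threads
    int rqueuedepth;      // queue depth in every read thread
};

bool Init(const ConcurrentApplyOption& option);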

wapplyMap_[i]->th = std::move(
std::thread(&ConcurrentApplyModule::Run, this, type, i));
break;
}
Contributor

The std::move here is redundant.

Contributor Author

done
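
Why the std::move is redundant: the std::thread constructed on the right-hand side is already a prvalue, so move assignment is selected without the cast. The statement reduces to:

wapplyMap_[i]->th =
    std::thread(&ConcurrentApplyModule::Run, this, type, i);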

}
}

void ConcurrentApplyModule::Stop() {
Contributor

Stop here exits directly without waiting for all the tasks in the TaskQueue to finish.
Is it acceptable not to handle this case?
Or is it guaranteed that the queue can never still contain tasks at exit?

Contributor Author

Stop does not need to wait for tasks to finish here; callers that need that guarantee should call Flush before Stop.
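
The contract the author describes, as a usage sketch (module is a hypothetical instance):

module.Flush();  // blocks until every task already queued has been applied
module.Stop();   // safe to tear the threads down; the queues are now empty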

WRITE
};

class CURVE_CACHELINE_ALIGNMENT ConcurrentApplyModule {
Member

The read and write logic is almost completely identical, and the code here duplicates it once for each side. Why not reuse a single ConcurrentApplyModule and make read and write two instances of that class, instead of implementing both inside one class? The read pool and the write pool differ only in thread count.

Contributor Author

That would increase the number of objects held outside, and the code to maintain them would be scattered everywhere.
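
For concreteness, the reviewer's alternative would look roughly like this (names hypothetical): a single-pool module instantiated twice, with the caller routing by op type.

ConcurrentApplyModule readApply;   // pool sized for reads
ConcurrentApplyModule writeApply;  // pool sized for writes
readApply.Init(rconcurrentsize, rqueuedepth);
writeApply.Init(wconcurrentsize, wqueuedepth);

// the caller routes instead of the module doing it internally:
// (optype == CHUNK_OP_READ ? readApply : writeApply).Push(key, task);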

void ConcurrentApplyModule::Flush() {
std::atomic<bool>* signal = new (std::nothrow) std::atomic<bool>[wconcurrentsize_]; //NOLINT
std::mutex* mtx = new (std::nothrow) std::mutex[wconcurrentsize_];
std::condition_variable* cv = new (std::nothrow) std::condition_variable[wconcurrentsize_]; //NOLINT
Member

This could be optimized with a CountDown. Also, since they are deleted in the same function, there is no need for new; plain local objects would do.

Contributor Author

done
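
A sketch of the suggested fix. CountDownEvent stands in for curve's counting latch under src/common/concurrent; the exact API (Signal/Wait) and the tq member are assumptions. A single stack object replaces the three heap-allocated arrays:

void ConcurrentApplyModule::Flush() {
    // one barrier task per write thread; Wait returns once all have run
    curve::common::CountDownEvent event(wconcurrentsize_);
    for (int i = 0; i < wconcurrentsize_; ++i) {
        // the barrier runs only after everything queued before it
        wapplyMap_[i]->tq.Push([&event]() { event.Signal(); });
    }
    event.Wait();
}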

@ilixiaocui ilixiaocui force-pushed the master branch 3 times, most recently from c425458 to 0f7b289 on September 21, 2020 07:50
@ilixiaocui ilixiaocui force-pushed the master branch 4 times, most recently from 7275622 to 97dc3ed on September 22, 2020 07:58
@ilixiaocui ilixiaocui force-pushed the master branch 3 times, most recently from 6a793c6 to 4bac924 on September 22, 2020 12:46
@YunhuiChen YunhuiChen merged commit fe19779 into opencurve:master Sep 23, 2020