refactor job manager part1 #3976
Conversation
Commits compared: 41e079b to 0fa8935
Good Job, LGTM
Generally LGTM.
```cpp
JobDescription jobDesc(nebula::value(jobId), cmd, paras);
auto errorCode = jobMgr_->addJob(jobDesc, adminClient_);
```
The job manager will have one queue per space later, right?
Yes, a later PR will do the following work:
Each space has a priority queue.
Jobs in the same space are executed serially, ordered by priority.
Jobs in different spaces are executed in parallel, one stream per space.
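The per-space scheduling described above could be sketched roughly as below. This is a hypothetical illustration, not the real JobManager: the type names (`SpaceJobQueues`, `PendingJob`) and the convention that a smaller `priority` value runs first are assumptions for the sketch.

```cpp
#include <cstdint>
#include <map>
#include <queue>
#include <vector>

using GraphSpaceID = int32_t;
using JobID = int32_t;

struct PendingJob {
  JobID id;
  int priority;  // assumption: smaller value = higher priority
};

struct ByPriority {
  bool operator()(const PendingJob& a, const PendingJob& b) const {
    // std::priority_queue keeps the "greatest" element on top, so
    // comparing with > turns it into a min-heap on `priority`.
    return a.priority > b.priority;
  }
};

class SpaceJobQueues {
 public:
  // Every space gets its own priority queue.
  void addJob(GraphSpaceID space, PendingJob job) {
    queues_[space].push(job);
  }

  // Pop the highest-priority job of one space. Jobs within a space are
  // dequeued serially; separate workers can poll different spaces in
  // parallel (locking omitted in this sketch).
  bool tryDequeue(GraphSpaceID space, PendingJob* out) {
    auto it = queues_.find(space);
    if (it == queues_.end() || it->second.empty()) {
      return false;
    }
    *out = it->second.top();
    it->second.pop();
    return true;
  }

 private:
  std::map<GraphSpaceID,
           std::priority_queue<PendingJob, std::vector<PendingJob>, ByPriority>>
      queues_;
};
```

With two jobs enqueued in space 1 at priorities 5 and 1, `tryDequeue` returns the priority-1 job first; a queue for a space with no jobs simply reports empty.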
good job
What type of PR is this?
What problem(s) does this PR solve?
Issue(s) number:
Description:
Refactor the job manager. This PR mainly covers the following parts:
Separate job scheduling by space.
Remove job_concurrency.
Only the compact, flush, and stats syntaxes support job_concurrency, and the job passes job_concurrency down to its tasks, but the tasks never actually use it.
Even using job_concurrency would not make much sense: within a space, compactTask and flushTask already run concurrently on storage according to the number of engines, and statsTask runs concurrently according to the number of parts. FLAGS_max_concurrent_subtasks is enough.
Fix bugs, such as: Call thrift RunAdminJob to create a balance job; if the space does not exist, the job is still created (#3743)
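The space-existence fix mentioned above could look roughly like this. A minimal sketch, assuming toy stand-ins for the real types: the actual code validates the space against the meta kvstore and uses `nebula::cpp2::ErrorCode`, not the `JobManager`/`ErrorCode` shown here.

```cpp
#include <cstdint>
#include <set>
#include <string>

using GraphSpaceID = int32_t;

// Toy error code for illustration only.
enum class ErrorCode { SUCCEEDED, E_SPACE_NOT_FOUND };

struct JobManager {
  std::set<GraphSpaceID> knownSpaces;  // stand-in for the meta store

  // Reject the job up front when the target space does not exist,
  // instead of persisting a job that can never run (the #3743 bug).
  ErrorCode addJob(GraphSpaceID space, const std::string& /*cmd*/) {
    if (knownSpaces.count(space) == 0) {
      return ErrorCode::E_SPACE_NOT_FOUND;
    }
    // ... enqueue into the per-space queue ...
    return ErrorCode::SUCCEEDED;
  }
};
```

The point of the design is ordering: the existence check happens before the job is written anywhere, so a balance job against a dropped or never-created space fails fast rather than leaving an orphan entry.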
How do you solve it?
Special notes for your reviewer, ex. impact of this fix, design document, etc:
Checklist:
Tests:
Affects:
Release notes:
Please confirm whether this should be reflected in the release notes, and how to describe it: