Run job deregistering in a single transaction #4861
Conversation
Fixes #4299 Upon investigating this case further, we determined the issue to be a race between applying the `JobBatchDeregisterRequest` FSM operation and processing job-deregister evals. Processing job-deregister evals should wait until the FSM log message finishes applying, by using the snapshot index. However, with `JobBatchDeregister`, each individual job deregistration accidentally incremented the snapshot index as it was applied, which triggered processing of job-deregister evals early. When a Nomad server receives an eval for a job in the batch that is yet to be deleted, we may accidentally re-run it depending on the state of its allocations. This change ensures that we deregister all of the jobs and insert all evals in a single transaction, thus blocking processing of related evals until deregistering completes.
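To make the race concrete, here is a minimal, self-contained Go sketch. It is not Nomad's actual state store or FSM code; all types and names below are hypothetical. It contrasts bumping the apply index per job, which lets a watcher observe a half-applied batch, with the single-transaction approach this PR describes, where all jobs are deleted and all evals inserted under one lock and the index is bumped once at the end.

```go
package main

import (
	"fmt"
	"sync"
)

// stateStore is a toy stand-in for a server-side state store: it tracks live
// jobs, pending evals, and a monotonically increasing apply index.
type stateStore struct {
	mu    sync.Mutex
	index uint64
	jobs  map[string]bool
	evals map[string]string // evalID -> jobID
}

func newStateStore() *stateStore {
	return &stateStore{jobs: map[string]bool{}, evals: map[string]string{}}
}

// batchDeregisterPerJob mimics the racy behaviour: every job deletion bumps
// the index on its own, so a watcher waiting for "index >= N" can wake up and
// start processing deregister evals while later jobs in the batch still exist.
func (s *stateStore) batchDeregisterPerJob(jobIDs, evalIDs []string) {
	for _, id := range jobIDs {
		s.mu.Lock()
		delete(s.jobs, id)
		s.index++ // visible to watchers before the whole batch is applied
		s.mu.Unlock()
	}
	s.mu.Lock()
	for i, e := range evalIDs {
		s.evals[e] = jobIDs[i%len(jobIDs)]
	}
	s.index++
	s.mu.Unlock()
}

// batchDeregisterSingleTxn mirrors the fix: all job deletions and all eval
// inserts happen under one lock ("transaction"), and the index is bumped once
// at the end, so eval processing never sees a half-applied batch.
func (s *stateStore) batchDeregisterSingleTxn(jobIDs, evalIDs []string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	for _, id := range jobIDs {
		delete(s.jobs, id)
	}
	for i, e := range evalIDs {
		s.evals[e] = jobIDs[i%len(jobIDs)]
	}
	s.index++ // single index bump for the whole batch
}

func main() {
	s := newStateStore()
	s.jobs["batch-a"], s.jobs["batch-b"] = true, true
	s.batchDeregisterSingleTxn([]string{"batch-a", "batch-b"}, []string{"eval-1", "eval-2"})
	fmt.Printf("index=%d jobs=%v evals=%v\n", s.index, s.jobs, s.evals)
}
```

With the per-job variant, a goroutine watching the index can start handling a deregister eval for `batch-b` while that job still exists in the store; the single-transaction variant removes that window.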
I have tested this change with my reproduction setup and didn't notice a case of a job retrying even after 10 hours; previously a case would appear in ~5-10 minutes. So success?!
LGTM after comments
a422d8c to 9405473
@notnoop After we backported this to 0.8.6, we began to see, from time to time, that when we manually run GC via
after that the Nomad server cluster is fully damaged and stops functioning properly
On our test stand we launch the following jobs
and
Before the cluster crash, the state of the test job was
We are absolutely sure that we made the backport correctly.
First of all, we compared our version with the 0.8.7 branch, and we can also provide our patch.
Also, I must say that this is not 100% reproducible; we have caught it only two (2) times.
@tantra35 Thank you so much for reporting the issue and providing a test case. I'll dig into this tomorrow (on Monday) and follow up.
@notnoop Thanks for the reaction. I must also say that I'm not sure the cause is exactly this PR, because our version also includes other backports, but when we tested them we didn't see any issues. It seems this happens when we wait for a batch job to end successfully and then launch it again, so our steps are as follows
@notnoop We tested this patch and it seems everything worked well; now we can't launch the same job if its previous version is in a dead state, and only if we run GC does Nomad allow us to launch a new one.
@tantra35 Re: "we can't launch the same job if its previous version is in a dead state; only if we run GC does Nomad allow us to launch a new one": if jobs are resubmitted with a change, they should run without needing a GC. If a batch job's allocs are all complete and you resubmit it with no changes, Nomad will not rerun completed allocations.
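As a toy illustration of the behaviour described above (this is not Nomad's code; the struct and hash comparison are hypothetical), deciding whether a resubmitted batch job needs new work can be thought of as comparing the submitted spec against the stored one:

```go
package main

import (
	"crypto/sha256"
	"encoding/json"
	"fmt"
)

// jobSpec is a hypothetical, stripped-down job definition, not Nomad's real
// Job struct. It only illustrates the idea: if a resubmitted spec hashes to
// the same value as the stored one, there is nothing new to schedule, so
// completed batch allocations are not rerun.
type jobSpec struct {
	ID      string
	Type    string
	Command string
}

func specHash(j jobSpec) [32]byte {
	b, _ := json.Marshal(j)
	return sha256.Sum256(b)
}

func main() {
	stored := jobSpec{ID: "batch-a", Type: "batch", Command: "echo hi"}
	resubmitted := stored // unchanged resubmission
	if specHash(stored) == specHash(resubmitted) {
		fmt.Println("no change detected: completed allocations are not rerun")
	} else {
		fmt.Println("spec changed: a new version and evaluation are created")
	}
}
```

If the specs are identical, completed allocations stay completed; any change to the spec produces a new job version and a new evaluation.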
@preetapan Before @notnoop made the final fixes, we had the ability to launch a batch job that hadn't changed, and you can see this in the provided
That's interesting - I wasn't able to reproduce it with different Nomad releases. I captured my output for Nomad 0.8.2, for example: https://gist.github.com/notnoop/151105c99070d93333bed23aec7ce42c - and you can see that resubmitting the same job doesn't trigger a new allocation (i.e. doesn't run again) until the job is modified. If you'd like to investigate this further, please open a new ticket with the version of Nomad you are using and the commands you ran, and we'll follow up there - I'm afraid of losing valuable insights and issues in PR comments. Thanks!
@notnoop This is 100% reproducible in the case I described in #4532; yes, there we manually stop batch jobs. And I must also mention that I see again that I can launch
Here we can see that the version of the allocation changes, but the actual job description does not.
This happens randomly, in a test Vagrant environment.
I'm going to lock this pull request because it has been closed for 120 days ⏳. This helps our maintainers find and focus on the active contributions.