Adding new choice to --on-error #1974
base: main
Conversation
This pull request has been mentioned on Common Workflow Language Discourse. There might be relevant details there: https://cwl.discourse.group/t/how-to-fail-fast-during-parallel-scatter/868/5
cwltool/job.py (outdated)

```python
nonlocal ks_tm
if kill_switch.is_set():
    _logger.error("[job %s] terminating by kill switch", self.name)
    if sproc.stdin: sproc.stdin.close()
```
This needs to be two lines; run `make dev cleanup` (or just `make cleanup` if you have already run `make dev`) to fix that automatically.
Codecov Report
Attention: Patch coverage is
Additional details and impacted files:
@@ Coverage Diff @@
## main #1974 +/- ##
==========================================
- Coverage 83.81% 77.06% -6.76%
==========================================
Files 46 46
Lines 8262 8333 +71
Branches 2199 2120 -79
==========================================
- Hits 6925 6422 -503
- Misses 856 1350 +494
- Partials 481 561 +80

☔ View full report in Codecov by Sentry.
… runtimeContext.on_error = "kill", then the switch is activated. WorkflowKillSwitch is raised so it can be handled at the workflow and executor levels
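The activation path described in this commit can be sketched as follows. This is a simplified illustration, not cwltool's actual code: the `RuntimeContext` stand-in, the `on_job_failure` helper, and the exception message format are assumptions; only the names `runtimeContext.on_error`, `kill_switch`, and `WorkflowKillSwitch` come from the PR itself.

```python
import threading


class WorkflowKillSwitch(Exception):
    """Raised when a job fails under --on-error=kill, so the event can be
    handled at the workflow and executor levels."""


class RuntimeContext:
    """Hypothetical stand-in for cwltool's runtime context object."""

    def __init__(self, on_error="stop"):
        self.on_error = on_error
        # A shared threading.Event acts as the kill switch.
        self.kill_switch = threading.Event()


def on_job_failure(runtime_context, job_name):
    """Activate the kill switch when a job fails and --on-error=kill."""
    if runtime_context.on_error == "kill":
        runtime_context.kill_switch.set()
        # Raising (rather than returning) lets workflow- and
        # executor-level code react to the shutdown.
        raise WorkflowKillSwitch(f"[job {job_name}] activated kill switch")
```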
…ch's status in the monitor function. The monitor function has, up to this point, only gathered memory usage statistics via a timer thread. A second timer thread now monitors the kill switch.
…revent pending tasks from starting by simply draining the queue. This is a very loose policy, but since the kill switch response is handled at the job level, any tasks that start after the kill switch is activated will take care of themselves and self-terminate.
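The "drain the queue" policy reduces to discarding pending work items. A minimal sketch, assuming a standard `queue.Queue` rather than cwltool's actual `TaskQueue`:

```python
import queue


def drain(task_queue):
    """Discard all pending tasks so none of them start; return the count.

    Tasks already running are unaffected; under the policy described
    above, they notice the kill switch themselves and self-terminate.
    """
    dropped = 0
    while True:
        try:
            task_queue.get_nowait()
            dropped += 1
        except queue.Empty:
            return dropped
```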
… an executor. The workflow_eval_lock release had to be moved to the finally block in MultithreadedJobExecutor.run_jobs(). Otherwise, TaskQueue threads running MultithreadedJobExecutor._runner() will never join() because _runner() waits indefinitely for the workflow_eval_lock in its own finally block.
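The locking hazard this commit fixes can be shown in miniature. The sketch below uses a module-level lock and a bare `run_jobs` as simplified stand-ins for `MultithreadedJobExecutor.run_jobs()` and `workflow_eval_lock`; it is not cwltool's actual code, but it demonstrates why the release belongs in a `finally` block: an exception raised while the lock is held would otherwise leave it locked forever, and `_runner()` threads waiting on it in their own `finally` blocks could never `join()`.

```python
import threading

workflow_eval_lock = threading.Lock()


def run_jobs(jobs, run_one):
    """Run jobs while holding the evaluation lock; always release it."""
    workflow_eval_lock.acquire()
    try:
        for job in jobs:
            run_one(job)
    finally:
        # Without this finally, an exception above (e.g. a raised
        # WorkflowKillSwitch) would leave the lock held and deadlock
        # every thread blocked on it.
        workflow_eval_lock.release()
```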
So that the runtime_context object can still be pickled. Other cleanups
…or-abort # Conflicts: # cwltool/errors.py
…askQueue. This helps to better synchronize the kill switch event and avoid adding or executing tasks after the switch has been set. This approach is tighter than my previous draft, but a race condition still exists where a task might be started after the kill switch has been set and announced. If this happens, the leaked job's monitor function will kill it, so the subprocess's lifespan will be at most the monitor's timer interval (currently 1 second). When this rare event happens, the console output will be potentially confusing, since it will show a new job starting after the kill switch has been announced.
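The tighter policy amounts to re-checking the kill switch immediately before executing each task, so at most one task can slip through the race window this commit describes. A hedged sketch, with an illustrative `worker` loop standing in for cwltool's TaskQueue worker threads:

```python
import queue
import threading


def worker(task_queue, kill_switch, executed):
    """Drain the queue, running tasks only while the switch is unset.

    Checking kill_switch right before each task narrows (but cannot
    fully close) the race between the check and the switch being set.
    """
    while True:
        try:
            task = task_queue.get_nowait()
        except queue.Empty:
            return
        if kill_switch.is_set():
            continue  # discard remaining tasks instead of running them
        executed.append(task)
```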
… when exiting due to kill switch. Those actions have been placed under a `finally` block so that they are executed by both the "switching" job and the "responding" jobs. However, some of these post actions added a lot of redundant and unhelpful terminal output when handling jobs killed DUE TO the kill switch; the verbose output obscured the error's cause, which isn't helpful. Two new process statuses have been added in order to better handle the event:
- indeterminant: a default value for processStatus.
- killed: the job was killed due to the kill switch being set.
This approach also means that partial outputs aren't collected from jobs that have been killed.
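How the new statuses change post-job handling can be sketched as below. The `finalize_job` helper, its parameters, and the log message wording are assumptions for illustration; only the status values `indeterminant` and `killed` and the "skip output collection for killed jobs" behavior come from the commit description.

```python
def finalize_job(process_status, collect_outputs, log):
    """Post-job actions keyed on processStatus.

    Jobs ending with status "killed" log one short line and skip
    output collection, keeping the console focused on the original
    failure rather than on every responding job.
    """
    if process_status == "killed":
        log.append("terminated by kill switch; outputs not collected")
        return None
    if process_status == "success":
        return collect_outputs()
    # "indeterminant" (the default) or any failure status: no outputs.
    log.append(f"job finished with status {process_status}")
    return None
```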
1) Once a job has been terminated, all other parallel jobs should also terminate. In this test, the runtime of the workflow indicates whether the kill switch has been handled correctly: if the kill switch is successful, then the workflow's runtime should be significantly shorter than sleep_time.
2) Outputs produced by a successful step should still be collected. In this case, the completed step is make_array.
To be frank, this test could be simplified by using a ToolTimeLimit requirement rather than process_roulette.cwl.
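The runtime-based assertion in point 1 can be reduced to a small helper. This is a sketch of the test's idea only: `run_workflow`, the helper name, and the one-half threshold are stand-ins, not the actual code in `tests/test_parallel.py`.

```python
import time


def assert_killed_quickly(run_workflow, sleep_time):
    """Fail if the workflow ran long enough to suggest the kill switch
    was ignored and cwltool waited out the sleeping scatter jobs."""
    start = time.time()
    run_workflow()
    elapsed = time.time() - start
    assert elapsed < sleep_time / 2, (
        f"workflow ran {elapsed:.1f}s; kill switch likely not honored"
    )
```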
…to this issue. Other changes were offered by the tool, but they are outside the scope of this issue.
Thank you, again, @AlexTate for your PR! `tests/test_parallel.py::test_on_error_kill` is unfortunately failing.
Summary
This pull request introduces a new choice, `kill`, for the `--on-error` parameter.

Motivation
There currently isn't a way to have cwltool immediately stop parallel jobs when one of them fails. One might expect `--on-error stop` to accomplish this, but the help string is specific and accurate: "do not submit any more steps". Since scatter and subworkflow are treated as single "steps" within the parent workflow, cwltool is not wrong to wait for the rest of the step's parallel jobs to finish under `--on-error stop`. However, individual scatter jobs sometimes take a long time to complete, so if one of them fails early on, cwltool might wait a great length of time for the other scatter jobs to complete before terminating the workflow. With `--on-error kill`, all running jobs are quickly notified and self-terminate upon one job's failure.

Demonstration of the Issue
When running the following workflow with `cwltool --parallel --on-error stop`, the total runtime is ~33 seconds despite one of the scatterstep tasks terminating unexpectedly. Ideally the workflow would terminate immediately; `--on-error kill` accomplishes that.

Forum Post
https://cwl.discourse.group/t/how-to-fail-fast-during-parallel-scatter/868
Concerns
- `workflow_eval_lock.release()` had to be moved to the finally block in `MultithreadedJobExecutor.run_jobs()`.
- Are cleanup steps skipped in `JobBase._execute()` due to `if runtimeContext.kill_switch.is_set(): return`? For that matter, shouldn't there be a finally block to contain some of these steps, such as deleting runtime-generated files containing secrets?