From 2471778fd4055a67dabeb0cead5fdbb2c6c1bd4a Mon Sep 17 00:00:00 2001
From: "pre-commit-ci[bot]" <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Date: Tue, 30 Jan 2024 20:42:22 +0000
Subject: [PATCH] [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci
---
 projects/pnr-cicd-automation.yml   | 4 ++--
 projects/pnr-smart-job-retries.yml | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/projects/pnr-cicd-automation.yml b/projects/pnr-cicd-automation.yml
index 2d4e29f..92916e2 100644
--- a/projects/pnr-cicd-automation.yml
+++ b/projects/pnr-cicd-automation.yml
@@ -1,5 +1,5 @@
 ---
-name: CI/CD and Automation of Manual Operations 
+name: CI/CD and Automation of Manual Operations
 postdate: 2024-01-30
 categories:
   - Computing
@@ -35,4 +35,4 @@ contacts:
   - name: Zhangqier Wang
     email: wangzqe@mit.edu
   - name: Dmytro Kovalskyi
-    email: kdv@mit.edu
\ No newline at end of file
+    email: kdv@mit.edu
diff --git a/projects/pnr-smart-job-retries.yml b/projects/pnr-smart-job-retries.yml
index 1356c7b..3707bf2 100644
--- a/projects/pnr-smart-job-retries.yml
+++ b/projects/pnr-smart-job-retries.yml
@@ -21,7 +21,7 @@ program:
   - IRIS-HEP fellow
 shortdescription: Develop a tool to monitor and make smart decisions on how to retry CMS grid jobs.
 description: >
-  The CMS experiment runs its data processing and simulation jobs on the Worldwide LHC Computing Grid in the scale of ~100k jobs in parallel. It’s inevitable to avoid job failures on this scale, and thus it’s crucial to have an effective failure recovery system. The existing algorithm is agnostic to the information of other jobs which run at the same site or belong to the same physics class. The objective of this project is to develop a tool which will monitor all the CMS grid jobs and make smart decisions on how to retry them by aggregating the data coming from different jobs across the globe. Such decisions can potentially be: reducing the job submission to computing sites experiencing particular failures, changing the job configuration in case of inaccurate configurations, and not retrying potentially ill-configured jobs. This project has the potential to significantly improve efficiency of the whole CMS computing grid, reducing the wasted cpu cycles and increasing the overall throughput. 
+  The CMS experiment runs its data processing and simulation jobs on the Worldwide LHC Computing Grid in the scale of ~100k jobs in parallel. It’s inevitable to avoid job failures on this scale, and thus it’s crucial to have an effective failure recovery system. The existing algorithm is agnostic to the information of other jobs which run at the same site or belong to the same physics class. The objective of this project is to develop a tool which will monitor all the CMS grid jobs and make smart decisions on how to retry them by aggregating the data coming from different jobs across the globe. Such decisions can potentially be: reducing the job submission to computing sites experiencing particular failures, changing the job configuration in case of inaccurate configurations, and not retrying potentially ill-configured jobs. This project has the potential to significantly improve efficiency of the whole CMS computing grid, reducing the wasted cpu cycles and increasing the overall throughput.
 contacts:
   - name: Hassan Ahmed
     email: m.hassan@cern.ch
@@ -34,4 +34,4 @@ contacts:
   - name: Zhangqier Wang
     email: wangzqe@mit.edu
   - name: Dmytro Kovalskyi
-    email: kdv@mit.edu
\ No newline at end of file
+    email: kdv@mit.edu