Commit

lucalavezzo committed Jan 31, 2024
2 parents 8428229 + 2471778 commit 9ca9e82
Showing 2 changed files with 4 additions and 4 deletions.
4 changes: 2 additions & 2 deletions projects/pnr-cicd-automation.yml
@@ -1,5 +1,5 @@
---
-name: CI/CD and Automation of Manual Operations
+name: CI/CD and Automation of Manual Operations
postdate: 2024-01-30
categories:
- Computing
@@ -35,4 +35,4 @@ contacts:
- name: Zhangqier Wang
email: [email protected]
- name: Dmytro Kovalskyi
-  email: [email protected]
+  email: [email protected]
4 changes: 2 additions & 2 deletions projects/pnr-smart-job-retries.yml
@@ -21,7 +21,7 @@ program:
- IRIS-HEP fellow
shortdescription: Develop a tool to monitor and make smart decisions on how to retry CMS grid jobs.
description: >
-  The CMS experiment runs its data processing and simulation jobs on the Worldwide LHC Computing Grid at the scale of ~100k parallel jobs. Job failures are unavoidable at this scale, so an effective failure-recovery system is crucial. The existing retry algorithm is agnostic to information from other jobs running at the same site or belonging to the same physics class. The objective of this project is to develop a tool that monitors all CMS grid jobs and makes smart decisions on how to retry them by aggregating data from different jobs across the globe. Such decisions could include: reducing job submission to computing sites experiencing particular failures, changing the job configuration when it is inaccurate, and not retrying potentially ill-configured jobs. This project has the potential to significantly improve the efficiency of the whole CMS computing grid, reducing wasted CPU cycles and increasing overall throughput.
+  The CMS experiment runs its data processing and simulation jobs on the Worldwide LHC Computing Grid at the scale of ~100k parallel jobs. Job failures are unavoidable at this scale, so an effective failure-recovery system is crucial. The existing retry algorithm is agnostic to information from other jobs running at the same site or belonging to the same physics class. The objective of this project is to develop a tool that monitors all CMS grid jobs and makes smart decisions on how to retry them by aggregating data from different jobs across the globe. Such decisions could include: reducing job submission to computing sites experiencing particular failures, changing the job configuration when it is inaccurate, and not retrying potentially ill-configured jobs. This project has the potential to significantly improve the efficiency of the whole CMS computing grid, reducing wasted CPU cycles and increasing overall throughput.
contacts:
- name: Hassan Ahmed
email: [email protected]
@@ -34,4 +34,4 @@ contacts:
- name: Zhangqier Wang
email: [email protected]
- name: Dmytro Kovalskyi
-  email: [email protected]
+  email: [email protected]
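The retry-policy decisions described in pnr-smart-job-retries.yml (throttle sites with many failures, abort ill-configured jobs, otherwise retry) could be sketched roughly as below. This is an illustrative outline only, not part of the commit or of any CMS code; the class name, thresholds, and job fields (`RetryAdvisor`, `site`, `retries`, `config_error`) are all hypothetical.

```python
from collections import defaultdict

class RetryAdvisor:
    """Hypothetical sketch of aggregated retry decisions for grid jobs.

    All names and thresholds here are illustrative assumptions,
    not the actual CMS retry tool.
    """

    def __init__(self, site_failure_threshold=0.5, max_retries=3):
        # site -> [failed_count, total_count]
        self.site_stats = defaultdict(lambda: [0, 0])
        self.site_failure_threshold = site_failure_threshold
        self.max_retries = max_retries

    def record(self, site, failed):
        """Aggregate the outcome of one job at a site."""
        stats = self.site_stats[site]
        stats[0] += int(failed)
        stats[1] += 1

    def site_failure_rate(self, site):
        failed, total = self.site_stats[site]
        return failed / total if total else 0.0

    def decide(self, job):
        """Return a retry decision for a failed job (a dict of job fields)."""
        # Do not retry jobs that look ill-configured.
        if job.get("config_error"):
            return "abort"
        # Give up after too many attempts.
        if job.get("retries", 0) >= self.max_retries:
            return "abort"
        # Steer work away from sites with a high aggregate failure rate.
        if self.site_failure_rate(job["site"]) > self.site_failure_threshold:
            return "retry-elsewhere"
        return "retry"
```

For example, after recording many failures at one (hypothetical) site, `decide` would route a retry elsewhere rather than resubmitting to the failing site.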
