Merge branch 'main' of https://github.com/lucalavezzo/project_database
Showing 2 changed files with 4 additions and 4 deletions.
Changed file 1 of 2:
@@ -1,5 +1,5 @@
 ---
-name: CI/CD and Automation of Manual Operations
+name: CI/CD and Automation of Manual Operations
 postdate: 2024-01-30
 categories:
 - Computing
@@ -35,4 +35,4 @@ contacts:
 - name: Zhangqier Wang
   email: [email protected]
 - name: Dmytro Kovalskyi
-  email: [email protected]
+  email: [email protected]
Changed file 2 of 2:
@@ -21,7 +21,7 @@ program:
 - IRIS-HEP fellow
 shortdescription: Develop a tool to monitor and make smart decisions on how to retry CMS grid jobs.
 description: >
-  The CMS experiment runs its data processing and simulation jobs on the Worldwide LHC Computing Grid at the scale of ~100k jobs in parallel. Job failures are unavoidable at this scale, so an effective failure recovery system is crucial. The existing algorithm is agnostic to information from other jobs that run at the same site or belong to the same physics class. The objective of this project is to develop a tool that monitors all CMS grid jobs and makes smart decisions on how to retry them by aggregating data from different jobs across the globe. Such decisions can include: reducing job submission to computing sites experiencing particular failures, changing the job configuration when it is inaccurate, and not retrying potentially ill-configured jobs. This project has the potential to significantly improve the efficiency of the whole CMS computing grid, reducing wasted CPU cycles and increasing overall throughput.
+  The CMS experiment runs its data processing and simulation jobs on the Worldwide LHC Computing Grid at the scale of ~100k jobs in parallel. Job failures are unavoidable at this scale, so an effective failure recovery system is crucial. The existing algorithm is agnostic to information from other jobs that run at the same site or belong to the same physics class. The objective of this project is to develop a tool that monitors all CMS grid jobs and makes smart decisions on how to retry them by aggregating data from different jobs across the globe. Such decisions can include: reducing job submission to computing sites experiencing particular failures, changing the job configuration when it is inaccurate, and not retrying potentially ill-configured jobs. This project has the potential to significantly improve the efficiency of the whole CMS computing grid, reducing wasted CPU cycles and increasing overall throughput.
 contacts:
 - name: Hassan Ahmed
   email: [email protected]
@@ -34,4 +34,4 @@ contacts:
 - name: Zhangqier Wang
   email: [email protected]
 - name: Dmytro Kovalskyi
-  email: [email protected]
+  email: [email protected]
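
The project description in the second file explains that the tool would aggregate failure data across jobs and sites and then decide how, or whether, to retry each failed job. As a rough illustration only, here is a minimal Python sketch of that kind of decision logic; the JobReport fields, site names, threshold, and action labels are assumptions made for this example and are not taken from the project or from any CMS software.

```python
# A minimal, hypothetical sketch of the retry-decision logic described above.
# Nothing here comes from the project repository; JobReport, the site names,
# the threshold, and the action labels are all illustrative assumptions.
from collections import Counter
from dataclasses import dataclass


@dataclass
class JobReport:
    job_id: str
    site: str           # grid site the job ran at, e.g. "T2_US_MIT" (illustrative)
    exit_code: int      # 0 means the job succeeded
    config_error: bool  # True if the failure looks like a bad job configuration


def decide_retries(reports, site_failure_threshold=0.5):
    """Aggregate failures across all jobs and pick a retry action per failed job."""
    jobs_per_site = Counter(r.site for r in reports)
    failures_per_site = Counter(r.site for r in reports if r.exit_code != 0)

    actions = {}
    for r in reports:
        if r.exit_code == 0:
            continue  # successful jobs need no action
        site_failure_rate = failures_per_site[r.site] / jobs_per_site[r.site]
        if r.config_error:
            # Don't keep retrying a job that appears ill-configured.
            actions[r.job_id] = "hold-for-operator"
        elif site_failure_rate > site_failure_threshold:
            # Many jobs failing at this site: retry, but steer away from the site.
            actions[r.job_id] = f"retry-avoiding-{r.site}"
        else:
            actions[r.job_id] = "retry"
    return actions


# Example: one transient failure, one likely misconfiguration.
reports = [
    JobReport("job-1", "T2_US_MIT", exit_code=0, config_error=False),
    JobReport("job-2", "T2_US_MIT", exit_code=1, config_error=False),
    JobReport("job-3", "T1_DE_KIT", exit_code=2, config_error=True),
]
print(decide_retries(reports))  # {'job-2': 'retry', 'job-3': 'hold-for-operator'}
```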