Add serialised data to ci #338
**New file** — `@@ -0,0 +1,44 @@` (a top-level pre-commit configuration that delegates to each sub-project's own config):

```yaml
repos:
  - repo: local
    hooks:
      - id: run-common-precommit
        name: Run Model Common Pre-commit
        entry: pre-commit run --config model/common/.pre-commit-config.yaml --all-files
        language: system
        pass_filenames: false
        always_run: true

      - id: run-driver-precommit
        name: Run Model Driver Pre-commit
        entry: pre-commit run --config model/driver/.pre-commit-config.yaml --all-files
        language: system
        pass_filenames: false
        always_run: true

      - id: run-atmosphere-advection-precommit
        name: Run Model Atmosphere Advection Pre-commit
        entry: pre-commit run --config model/atmosphere/advection/.pre-commit-config.yaml --all-files
        language: system
        pass_filenames: false
        always_run: true

      - id: run-atmosphere-diffusion-precommit
        name: Run Model Atmosphere Diffusion Pre-commit
        entry: pre-commit run --config model/atmosphere/diffusion/.pre-commit-config.yaml --all-files
        language: system
        pass_filenames: false
        always_run: true

      - id: run-atmosphere-dycore-precommit
        name: Run Model Atmosphere Dycore Pre-commit
        entry: pre-commit run --config model/atmosphere/dycore/.pre-commit-config.yaml --all-files
        language: system
        pass_filenames: false
        always_run: true

      - id: run-tools-precommit
        name: Run Tools Pre-commit
        entry: pre-commit run --config tools/.pre-commit-config.yaml --all-files
        language: system
        pass_filenames: false
        always_run: true
```
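Each hook above uses `language: system` with `always_run: true`, so every commit simply shells out to pre-commit with one sub-project's nested config. The sketch below only assembles and prints the equivalent manual command (it deliberately does not execute pre-commit, which may not be installed); the sub-project path is taken from the config above:

```shell
# Pick one sub-project from the config above.
subproject="model/common"

# This is exactly the command the corresponding hook's `entry` runs.
cmd="pre-commit run --config ${subproject}/.pre-commit-config.yaml --all-files"
echo "${cmd}"
```

Swapping `subproject` for `model/driver`, `model/atmosphere/dycore`, etc. reproduces each of the other hooks.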
**Changed file** (the CI pipeline configuration):

```diff
@@ -34,13 +34,16 @@ variables:
     - pyversion_no_dot="${PYTHON_VERSION//./}"
     - pip install tox clang-format
     - python -c "import cupy"
+    - ls ${SERIALIZED_DATA_PATH}
   variables:
     SLURM_JOB_NUM_NODES: 1
     SLURM_NTASKS: 1
     SLURM_TIMELIMIT: '06:00:00'
     CRAY_CUDA_MPS: 1
     NUM_PROCESSES: auto
     VIRTUALENV_SYSTEM_SITE_PACKAGES: 1
+    CSCS_NEEDED_DATA: icon4py
+    TEST_DATA_PATH: "/apps/daint/UES/jenkssl/ciext/icon4py"

 build_job:
   extends: .build_template
```
```diff
@@ -49,14 +52,14 @@ test_model_job_roundtrip_simple_grid:
   extends: .test_template
   stage: test
   script:
-    - tox -r -c model/ --verbose -- --benchmark-skip -n auto
+    - tox -r -e run_stencil_tests -c model/ --verbose

 test_model_job_dace_cpu_simple_grid:
   extends: .test_template
   stage: test
   script:
     - pip install dace==$DACE_VERSION
-    - tox -r -e stencil_tests -c model/ --verbose -- --benchmark-skip -n auto --backend=dace_cpu
+    - tox -r -e run_stencil_tests -c model/ --verbose -- --backend=dace_cpu
   only:
     - main
   allow_failure: true
```

> **Review comment** (on `extends: .test_template`): Just leaving my 2 cents here: all of these jobs could easily be expressed using https://docs.gitlab.com/ee/ci/yaml/#needsparallelmatrix
>
> **Reply:** I will try this in a new PR.
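The reviewer's `parallel:matrix` suggestion could collapse the near-identical per-backend jobs into a single definition. A minimal sketch (the job name and `BACKEND` variable are illustrative, not taken from this PR):

```yaml
test_model_stencils:
  extends: .test_template
  stage: test
  parallel:
    matrix:
      - BACKEND: [dace_cpu, dace_gpu, gtfn_cpu, gtfn_gpu]
  script:
    - tox -r -e run_stencil_tests -c model/ --verbose -- --backend=$BACKEND
```

GitLab expands the matrix into one job per `BACKEND` value, so per-backend options such as `only: main` or `allow_failure` would still need to be handled separately.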
```diff
@@ -66,7 +69,7 @@ test_model_job_dace_gpu_simple_grid:
   stage: test
   script:
     - pip install dace==$DACE_VERSION
-    - tox -r -e stencil_tests -c model/ --verbose -- --benchmark-skip -n auto --backend=dace_gpu
+    - tox -r -e run_stencil_tests -c model/ --verbose -- --backend=dace_gpu
   only:
     - main
   allow_failure: true
```
```diff
@@ -75,48 +78,48 @@ test_model_job_gtfn_cpu_simple_grid:
   extends: .test_template
   stage: test
   script:
-    - tox -r -e stencil_tests -c model/ --verbose -- --benchmark-skip -n auto --backend=gtfn_cpu
+    - tox -r -e run_stencil_tests -c model/ --verbose -- --backend=gtfn_cpu

 test_model_job_gtfn_gpu_simple_grid:
   extends: .test_template
   stage: test
   script:
-    - tox -r -e stencil_tests -c model/ --verbose -- --benchmark-skip -n auto --backend=gtfn_gpu
+    - tox -r -e run_stencil_tests -c model/ --verbose -- --backend=gtfn_gpu

 test_tools_job:
   extends: .test_template
   stage: test
   script:
     - tox -r -c tools/ --verbose

-benchmark_model_dace_cpu_simple_grid:
+benchmark_model_dace_cpu_icon_grid:
   extends: .test_template
   stage: benchmark
   script:
     - pip install dace==$DACE_VERSION
-    - tox -r -e stencil_tests -c model/ --verbose -- --benchmark-only --backend=dace_cpu --grid=simple_grid
+    - tox -r -e run_benchmarks -c model/ -- --backend=dace_cpu --grid=icon_grid
   only:
```
> **Review comment** (on `only:`): I was wondering: does our CI actually honor this? (I don't know how it works, and currently it always runs all of the jobs, and the benchmarks take quite long...)
>
> **Reply:** I am not sure, to be honest, since @edopao added these dace jobs; maybe he can explain more. I would assume these benchmarks run only on `main`.
>
> **Reply (@edopao):** As commented in today's standup meeting, the intention of this setting was to run the dace benchmark on `main` after the PR is merged. However, this setting is ignored in our setup, as also noted above. I agree that we could have a separate CI pipeline for benchmarking, automatically triggered after a PR is merged or by a daily job.

```diff
     - main
   when: manual
```
```diff

-benchmark_model_dace_gpu_simple_grid:
+benchmark_model_dace_gpu_icon_grid:
   extends: .test_template
   stage: benchmark
   script:
     - pip install dace==$DACE_VERSION
-    - tox -r -e stencil_tests -c model/ --verbose -- --benchmark-only --backend=dace_gpu --grid=simple_grid
+    - tox -r -e run_benchmarks -c model/ -- --backend=dace_gpu --grid=icon_grid
```
> **Review comment** (on the tox invocation): Do you need this double double-dash? Or did you simply forget to delete one?
>
> **Reply:** Yes, the double dashes are needed: the first `--` denotes the end of the arguments passed to tox itself, and any subsequent arguments are treated as positional arguments passed to whatever command tox invokes in this case.
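The `--` convention the reply describes is easy to see in isolation: everything before the separator belongs to tox, everything after it is forwarded verbatim to the command tox runs. A minimal sketch (the argument list below is invented for illustration):

```python
# Split a command line at the first "--", the way tox separates its own
# options from arguments forwarded to the underlying command.
argv = ["-r", "-e", "run_benchmarks", "-c", "model/", "--",
        "--backend=dace_gpu", "--grid=icon_grid"]

sep = argv.index("--")       # position of the separator
tox_args = argv[:sep]        # consumed by tox itself
forwarded = argv[sep + 1:]   # passed through to the invoked command

print(tox_args)    # ['-r', '-e', 'run_benchmarks', '-c', 'model/']
print(forwarded)   # ['--backend=dace_gpu', '--grid=icon_grid']
```

So in `tox -r -e run_benchmarks -c model/ -- --backend=dace_gpu`, the second dash pair is simply the first forwarded argument, which itself happens to start with `--`.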
```diff
   only:
     - main
   when: manual

-benchmark_model_gtfn_cpu_simple_grid:
+benchmark_model_gtfn_cpu_icon_grid:
   extends: .test_template
   stage: benchmark
   script:
-    - tox -r -e stencil_tests -c model/ --verbose -- --benchmark-only --backend=gtfn_cpu --grid=simple_grid
+    - tox -r -e run_benchmarks -c model/ -- --backend=gtfn_cpu --grid=icon_grid

-benchmark_model_gtfn_gpu_simple_grid:
+benchmark_model_gtfn_gpu_icon_grid:
   extends: .test_template
   stage: benchmark
   script:
-    - tox -r -e stencil_tests -c model/ --verbose -- --benchmark-only --backend=gtfn_gpu --grid=simple_grid
+    - tox -r -e run_benchmarks -c model/ -- --backend=gtfn_gpu --grid=icon_grid
```
**Changed file** (stencil test module):

```diff
@@ -81,7 +81,12 @@ def reference(
         return dict(theta_v=theta_v, exner=exner)

     @pytest.fixture
-    def input_data(self, grid):
+    def input_data(self, grid, uses_icon_grid_with_otf):
```
> **Review comment:** I have merged the verification of the global (EXCLAIM Aquaplanet) run, which means there is an additional serialized dataset (which for the …
>
> **Reply:** Ok, sounds good, let's discuss it tomorrow; uploading to the server should be relatively straightforward, Andreas can help us.
```diff

+        if uses_icon_grid_with_otf:
+            pytest.skip(
+                "Execution domain needs to be restricted or boundary taken into account in stencil."
+            )
+
         kh_smag_e = random_field(grid, EdgeDim, KDim)
         inv_dual_edge_length = random_field(grid, EdgeDim)
         theta_v_in = random_field(grid, CellDim, KDim)
```
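The skip shown in the diff relies on a general pytest pattern: calling `pytest.skip()` inside a fixture skips every test that requests that fixture. A self-contained sketch of the pattern (all names other than the pytest API are invented; the real suite derives its flag from the selected grid and backend):

```python
import pytest


@pytest.fixture
def backend_flag():
    # Stand-in for uses_icon_grid_with_otf; hard-coded here so the
    # example is runnable without the real grid configuration.
    return False


@pytest.fixture
def input_data(backend_flag):
    # Skipping inside the fixture skips every test that requests input_data,
    # without each test having to repeat the condition.
    if backend_flag:
        pytest.skip("combination not supported by this stencil")
    return {"kh_smag_e": [0.1, 0.2]}


def test_uses_input_data(input_data):
    assert "kh_smag_e" in input_data
```

Flipping `backend_flag` to return `True` would report `test_uses_input_data` as skipped rather than failed, which is why the diff can gate an incompatible grid/backend combination in one place.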
> **Review comment:** nice...