Sync <- Mlperf inference #507

Merged 85 commits on Nov 8, 2024

Commits (85)
232a22c
initial commit - model download to host
anandhu-eng Nov 5, 2024
0564d6b
safe tensor path -> new env keys
anandhu-eng Nov 5, 2024
f3b6b6f
updated mounts for amd-llama2
anandhu-eng Nov 5, 2024
e743110
Updated env - llama2 model
anandhu-eng Nov 5, 2024
f10267f
handled model download in host
anandhu-eng Nov 5, 2024
c20c175
added compressed tensors module
anandhu-eng Nov 5, 2024
c60d748
Added compressed tensors support
anandhu-eng Nov 5, 2024
3228f88
bug fix
anandhu-eng Nov 5, 2024
09fc291
Added an option to pull inference src changes for mlperf-inference
arjunsuresh Nov 6, 2024
9c6f5de
Added an option to pull inference src changes for mlperf-inference
arjunsuresh Nov 6, 2024
76b9788
added docker run
anandhu-eng Nov 6, 2024
9f3b7d2
added action in the description-steps
anandhu-eng Nov 6, 2024
79d445e
fix typo
anandhu-eng Nov 6, 2024
aa1c68a
Merge branch 'mlcommons:mlperf-inference' into mlperf-inference
arjunsuresh Nov 6, 2024
423e692
Fix diffuser version for SDXL reference implementation
arjunsuresh Nov 6, 2024
244952d
Update float16 name for SDXL model
arjunsuresh Nov 6, 2024
d2041f3
Merge pull request #489 from mlcommons/anandhu-eng-patch-1
arjunsuresh Nov 6, 2024
79e7cb5
Use http link for intel conda packages
arjunsuresh Nov 6, 2024
2e5eaec
Merge branch 'mlcommons:mlperf-inference' into mlperf-inference
arjunsuresh Nov 6, 2024
52e8419
Merge pull request #491 from GATEOverflow/mlperf-inference
arjunsuresh Nov 6, 2024
81ff622
Dont use '-dt' for Nvidia ml-model-gptj
arjunsuresh Nov 6, 2024
83eb264
Merge branch 'mlperf-inference' into mlperf-inference
arjunsuresh Nov 6, 2024
6ef053c
Merge pull request #492 from GATEOverflow/mlperf-inference
arjunsuresh Nov 6, 2024
cd52f7a
Update starting weights filename for SDXL MLPerf inference
arjunsuresh Nov 6, 2024
4ff2b4d
Merge branch 'mlperf-inference' into mlperf-inference
arjunsuresh Nov 6, 2024
a139cb1
Merge pull request #494 from GATEOverflow/mlperf-inference
arjunsuresh Nov 6, 2024
0c2fb29
Update test-nvidia-mlperf-implementation.yml
arjunsuresh Nov 6, 2024
aa5bfac
Create test-mlperf-inference-intel
arjunsuresh Nov 6, 2024
c4e45d4
Stash changes in git pull repo script
arjunsuresh Nov 7, 2024
c80071f
Update test-scc24-sdxl.yaml
arjunsuresh Nov 7, 2024
45207c2
Temp fix: use inference dev branch for SDXL
arjunsuresh Nov 7, 2024
31534a9
Dont fail in pull-git-repo if there are local changes
arjunsuresh Nov 7, 2024
fb357c0
fix dependency for nvidia-mlperf-inference-gptj
arjunsuresh Nov 7, 2024
69daefe
Merge branch 'mlcommons:mlperf-inference' into mlperf-inference
arjunsuresh Nov 7, 2024
05f4bf8
Merge pull request #495 from GATEOverflow/mlperf-inference
arjunsuresh Nov 7, 2024
21425ef
Preprocess mlperf inference submission, code cleanup
arjunsuresh Nov 7, 2024
b40d0c5
Merge branch 'mlcommons:mlperf-inference' into mlperf-inference
arjunsuresh Nov 7, 2024
0c68370
Update test-mlperf-inference-intel
arjunsuresh Nov 7, 2024
ea39555
Update test-mlperf-inference-intel
arjunsuresh Nov 7, 2024
b8bdc0e
handle systems where sudo is absent
anandhu-eng Nov 7, 2024
7a8a152
Cleanups for MLPerf inference preprocess script, use inference dev br…
arjunsuresh Nov 7, 2024
bdda54a
Merge pull request #496 from GATEOverflow/mlperf-inference
arjunsuresh Nov 7, 2024
f80fe87
Update and rename test-mlperf-inference-intel to test-mlperf-inferenc…
arjunsuresh Nov 7, 2024
5200fcb
Support sample_ids_path in coco2014 accuracy script
arjunsuresh Nov 7, 2024
4547b5f
Merge branch 'mlperf-inference' into mlperf-inference
arjunsuresh Nov 7, 2024
0600c69
Added rocm device for AMD mlperf inference
arjunsuresh Nov 7, 2024
51794e4
Merge pull request #497 from GATEOverflow/mlperf-inference
arjunsuresh Nov 7, 2024
2c960c4
Create test-mlperf-inference-amd.yml
arjunsuresh Nov 7, 2024
50ad695
Update test-nvidia-mlperf-implementation.yml
arjunsuresh Nov 7, 2024
37c4ac2
Update and rename test-mlperf-inference-amd.yml to test-amd-mlperf-in…
arjunsuresh Nov 7, 2024
5c0b178
Rename test-amd-mlperf-inference-implementationsyml to test-amd-mlper…
arjunsuresh Nov 7, 2024
8578565
Rename test-mlperf-inference-intel.yml to test-intel-mlperf-inference…
arjunsuresh Nov 7, 2024
c540520
Update test-intel-mlperf-inference-implementations.yml
arjunsuresh Nov 7, 2024
dd5048f
Rename test-nvidia-mlperf-implementation.yml to test-nvidia-mlperf-in…
arjunsuresh Nov 7, 2024
940b1c3
Update test-mlperf-inference-mixtral.yml
arjunsuresh Nov 7, 2024
b7a6e16
Update _cm.yaml
arjunsuresh Nov 7, 2024
73a9499
Merge pull request #480 from mlcommons/amd-llama
arjunsuresh Nov 7, 2024
9d4d39a
Merge branch 'mlcommons:mlperf-inference' into mlperf-inference
arjunsuresh Nov 7, 2024
42a360d
Update test-intel-mlperf-inference-implementations.yml
arjunsuresh Nov 7, 2024
4d6f7a0
Update test-scc24-sdxl.yaml
arjunsuresh Nov 7, 2024
a0cc32d
Update test-mlperf-inference-llama2.yml
arjunsuresh Nov 7, 2024
8f07689
Update test-mlperf-inference-llama2.yml
arjunsuresh Nov 7, 2024
2164e66
Added a retry for git clone failure
arjunsuresh Nov 7, 2024
2231aea
Merge pull request #498 from GATEOverflow/mlperf-inference
arjunsuresh Nov 7, 2024
e2bb867
Added a retry for git clone failure
arjunsuresh Nov 7, 2024
58a2259
Merge branch 'mlperf-inference' into mlperf-inference
arjunsuresh Nov 7, 2024
5f28483
Merge pull request #499 from GATEOverflow/mlperf-inference
arjunsuresh Nov 7, 2024
ba46c63
Use custom version for dev branch of inference-src
arjunsuresh Nov 8, 2024
0e48b24
Merge branch 'mlperf-inference' into mlperf-inference
arjunsuresh Nov 8, 2024
12934db
Merge pull request #500 from GATEOverflow/mlperf-inference
arjunsuresh Nov 8, 2024
137cd39
Update default filename
anandhu-eng Nov 8, 2024
12bd285
Updated the file path shown to user
anandhu-eng Nov 8, 2024
a056d72
Update test-mlperf-inference-dlrm.yml
arjunsuresh Nov 8, 2024
1dd0379
Update test-mlperf-inference-mixtral.yml
arjunsuresh Nov 8, 2024
1977b73
Merge pull request #502 from mlcommons/anandhu-eng-patch-1
arjunsuresh Nov 8, 2024
b41b292
Merge branch 'mlcommons:mlperf-inference' into mlperf-inference
arjunsuresh Nov 8, 2024
24cd3cb
Response when cmd without sudo fails
anandhu-eng Nov 8, 2024
e65f6bd
Added torch_cuda for AMD llama2 quantization
arjunsuresh Nov 8, 2024
2a5e200
Delete project directory
arjunsuresh Nov 8, 2024
4392854
Update test-amd-mlperf-inference-implementations.yml
arjunsuresh Nov 8, 2024
123b85f
Merge pull request #506 from GATEOverflow/mlperf-inference
arjunsuresh Nov 8, 2024
a189c15
Merge branch 'main' into mlperf-inference
arjunsuresh Nov 8, 2024
11d77de
fix for system without sudo
anandhu-eng Nov 8, 2024
2f9e99c
Merge pull request #504 from mlcommons/anandhu-eng-patch-6
arjunsuresh Nov 8, 2024
9aecee8
Increment version to 0.3.25
arjunsuresh Nov 8, 2024
Files changed
26 changes: 26 additions & 0 deletions .github/workflows/test-amd-mlperf-inference-implementations.yml
@@ -0,0 +1,26 @@
name: MLPerf Inference AMD implementations

on:
  schedule:
    - cron: "29 4 * * *" #to be adjusted

jobs:
  build_nvidia:
    if: github.repository_owner == 'gateoverflow'
    runs-on: [ self-hosted, linux, x64, GO-spr ]
    strategy:
      fail-fast: false
      matrix:
        python-version: [ "3.12" ]
        model: [ "llama2-70b-99.9" ]
    steps:
      - name: Test MLPerf Inference AMD (build only) ${{ matrix.model }}
        run: |
          if [ -f "gh_action_conda/bin/deactivate" ]; then source gh_action_conda/bin/deactivate; fi
          python3 -m venv gh_action_conda
          source gh_action_conda/bin/activate
          export CM_REPOS=$HOME/GH_CM
          pip install --upgrade cm4mlops
          pip install tabulate
          cm run script --tags=run-mlperf,inference,_all-scenarios,_full,_r4.1-dev --execution_mode=valid --pull_changes=yes --pull_inference_changes=yes --model=${{ matrix.model }} --submitter="MLCommons" --hw_name=IntelSPR.24c --implementation=amd --backend=pytorch --category=datacenter --division=open --scenario=Offline --docker_dt=yes --docker_it=no --docker_cm_repo=gateoverflow@cm4mlops --adr.compiler.tags=gcc --device=rocm --results_dir=$HOME/gh_action_results --submission_dir=$HOME/gh_action_submissions --clean --docker --quiet
          # cm run script --tags=push,github,mlperf,inference,submission --repo_url=https://github.com/gateoverflow/mlperf_inference_unofficial_submissions_v5.0 --repo_branch=main --commit_message="Results from GH action on SPR.24c" --quiet --submission_dir=$HOME/gh_action_submissions --hw_name=IntelSPR.24c
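Editor's note: the AMD job reuses the same environment bootstrap as the Intel workflow further down (deactivate any stale venv, create a fresh one, drive everything through cm4mlops). A minimal sketch of running the same build outside CI, assuming python3 and PyPI access on a ROCm-capable host; MODEL is a hypothetical stand-in for the matrix variable:

# Sketch only: reproduces the workflow's bootstrap and build step locally.
if [ -f "gh_action_conda/bin/deactivate" ]; then source gh_action_conda/bin/deactivate; fi
python3 -m venv gh_action_conda
source gh_action_conda/bin/activate
export CM_REPOS=$HOME/GH_CM        # where cm4mlops keeps pulled repositories
pip install --upgrade cm4mlops tabulate
MODEL=llama2-70b-99.9              # stands in for ${{ matrix.model }}
cm run script --tags=run-mlperf,inference,_all-scenarios,_full,_r4.1-dev \
    --execution_mode=valid --pull_changes=yes --pull_inference_changes=yes \
    --model="$MODEL" --implementation=amd --backend=pytorch --device=rocm \
    --division=open --category=datacenter --scenario=Offline \
    --docker --docker_dt=yes --docker_it=no --clean --quiet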
5 changes: 3 additions & 2 deletions .github/workflows/test-cm-based-submission-generation.yml
@@ -20,6 +20,7 @@ jobs:
         division: ["closed", "open"]
         category: ["datacenter", "edge"]
         case: ["case-3", "case-7"]
+        action: ["run", "docker"]
         exclude:
           - os: macos-latest
           - os: windows-latest
@@ -38,7 +39,7 @@
       - name: Pull repo where test cases are uploaded
         run: |
           git clone -b submission-generation-tests https://github.com/anandhu-eng/inference.git submission_generation_tests
-      - name: Run Submission Generation - ${{ matrix.case }} ${{ matrix.category }} ${{ matrix.division }}
+      - name: Run Submission Generation - ${{ matrix.case }} ${{ matrix.action }} ${{ matrix.category }} ${{ matrix.division }}
         run: |
           if [ "${{ matrix.case }}" == "case-3" ]; then
             #results_dir="submission_generation_tests/case-3/"
@@ -49,6 +50,6 @@
           fi
           # Dynamically set the log group to simulate a dynamic step name
           echo "::group::$description"
-          cm run script --tags=generate,inference,submission --clean --preprocess_submission=yes --results_dir=submission_generation_tests/${{ matrix.case }}/ --run-checker --submitter=MLCommons --tar=yes --env.CM_TAR_OUTFILE=submission.tar.gz --division=${{ matrix.division }} --category=${{ matrix.category }} --env.CM_DETERMINE_MEMORY_CONFIGURATION=yes --quiet
+          cm ${{ matrix.action }} script --tags=generate,inference,submission --clean --preprocess_submission=yes --results_dir=submission_generation_tests/${{ matrix.case }}/ --run-checker --submitter=MLCommons --tar=yes --env.CM_TAR_OUTFILE=submission.tar.gz --division=${{ matrix.division }} --category=${{ matrix.category }} --env.CM_DETERMINE_MEMORY_CONFIGURATION=yes --quiet
           echo "::endgroup::"
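Editor's note: with the new `action` axis, each matrix cell now exercises the generation step twice, once natively and once through CM's docker wrapper. As a sketch, the templated command expands to the following (flags copied from the step above; `case-3`, `closed`, `edge` are one example cell):

# action == "run": native execution
cm run script --tags=generate,inference,submission --clean --preprocess_submission=yes \
    --results_dir=submission_generation_tests/case-3/ --run-checker --submitter=MLCommons \
    --tar=yes --env.CM_TAR_OUTFILE=submission.tar.gz --division=closed --category=edge \
    --env.CM_DETERMINE_MEMORY_CONFIGURATION=yes --quiet

# action == "docker": the same script, executed inside a CM-managed container
cm docker script --tags=generate,inference,submission --clean --preprocess_submission=yes \
    --results_dir=submission_generation_tests/case-3/ --run-checker --submitter=MLCommons \
    --tar=yes --env.CM_TAR_OUTFILE=submission.tar.gz --division=closed --category=edge \
    --env.CM_DETERMINE_MEMORY_CONFIGURATION=yes --quiet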

26 changes: 26 additions & 0 deletions .github/workflows/test-intel-mlperf-inference-implementations.yml
@@ -0,0 +1,26 @@
name: MLPerf Inference Intel implementations

on:
  schedule:
    - cron: "29 1 * * *" #to be adjusted

jobs:
  build_nvidia:
    if: github.repository_owner == 'gateoverflow'
    runs-on: [ self-hosted, linux, x64, GO-spr ]
    strategy:
      fail-fast: false
      matrix:
        python-version: [ "3.12" ]
        model: [ "resnet50", "bert-99" ]
    steps:
      - name: Test MLPerf Inference Intel ${{ matrix.model }}
        run: |
          if [ -f "gh_action_conda/bin/deactivate" ]; then source gh_action_conda/bin/deactivate; fi
          python3 -m venv gh_action_conda
          source gh_action_conda/bin/activate
          export CM_REPOS=$HOME/GH_CM
          pip install --upgrade cm4mlops
          pip install tabulate
          cm run script --tags=run-mlperf,inference,_all-scenarios,_submission,_full,_r4.1-dev --preprocess_submission=yes --execution_mode=valid --pull_changes=yes --pull_inference_changes=yes --model=${{ matrix.model }} --submitter="MLCommons" --hw_name=IntelSPR.24c --implementation=intel --backend=pytorch --category=datacenter --division=open --scenario=Offline --docker_dt=yes --docker_it=no --docker_cm_repo=gateoverflow@cm4mlops --adr.compiler.tags=gcc --device=cpu --results_dir=$HOME/gh_action_results --submission_dir=$HOME/gh_action_submissions --clean --docker --quiet
          cm run script --tags=push,github,mlperf,inference,submission --repo_url=https://github.com/gateoverflow/mlperf_inference_unofficial_submissions_v5.0 --repo_branch=main --commit_message="Results from GH action on SPR.24c" --quiet --submission_dir=$HOME/gh_action_submissions --hw_name=IntelSPR.24c
2 changes: 1 addition & 1 deletion .github/workflows/test-mlperf-inference-dlrm.yml
@@ -25,7 +25,7 @@ jobs:
         export CM_REPOS=$HOME/GH_CM
         python3 -m pip install cm4mlops
         cm pull repo
-        cm run script --tags=run-mlperf,inference,_performance-only --submitter="MLCommons" --model=dlrm-v2-99 --implementation=reference --backend=pytorch --category=datacenter --scenario=Offline --execution_mode=test --device=${{ matrix.device }} --docker --quiet --test_query_count=1 --target_qps=1 --docker_it=no --docker_cm_repo=gateoverflow@cm4mlops --adr.compiler.tags=gcc --hw_name=gh_action --docker_dt=yes --results_dir=$HOME/gh_action_results --clean
+        cm run script --tags=run-mlperf,inference,_performance-only --adr.mlperf-implementation.tags=_branch.dev --adr.mlperf-implementation.version=custom --submitter="MLCommons" --model=dlrm-v2-99 --implementation=reference --backend=pytorch --category=datacenter --scenario=Offline --execution_mode=test --device=${{ matrix.device }} --docker --quiet --test_query_count=1 --target_qps=1 --docker_it=no --docker_cm_repo=gateoverflow@cm4mlops --adr.compiler.tags=gcc --hw_name=gh_action --docker_dt=yes --results_dir=$HOME/gh_action_results --clean
 
   build_intel:
     if: github.repository_owner == 'gateoverflow_off'
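Editor's note: the two new `--adr.*` flags pin this workflow to the inference repo's dev branch rather than the default release pin; the semantics are inferred from the related commits ("Temp fix: use inference dev branch for SDXL", "Use custom version for dev branch of inference-src"). A sketch of the general pattern, with `mlperf-implementation` being the dependency name this script resolves:

# --adr.<dependency>.<key>=<value> overrides how a script dependency is resolved
# (assumed behavior: tags=_branch.dev checks out the "dev" branch of the
#  inference sources; version=custom bypasses the default pinned version)
cm run script --tags=run-mlperf,inference,_performance-only \
    --adr.mlperf-implementation.tags=_branch.dev \
    --adr.mlperf-implementation.version=custom \
    --model=dlrm-v2-99 --implementation=reference --backend=pytorch --quiet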
9 changes: 5 additions & 4 deletions .github/workflows/test-mlperf-inference-llama2.yml
@@ -5,12 +5,12 @@ name: MLPerf inference LLAMA 2 70B
 
 on:
   schedule:
-    - cron: "30 19 * * 4"
+    - cron: "30 2 * * 4"
 
 jobs:
   build_reference:
     if: github.repository_owner == 'gateoverflow'
-    runs-on: [ self-hosted, GO-i9, linux, x64 ]
+    runs-on: [ self-hosted, GO-spr, linux, x64 ]
     strategy:
       fail-fast: false
       matrix:
@@ -24,9 +24,10 @@ jobs:
         source gh_action/bin/deactivate || python3 -m venv gh_action
         source gh_action/bin/activate
         export CM_REPOS=$HOME/GH_CM
-        python3 -m pip install cm4mlops
+        pip install cm4mlops
+        pip install tabulate
         cm pull repo
-        python3 -m pip install "huggingface_hub[cli]"
+        pip install "huggingface_hub[cli]"
         huggingface-cli login --token ${{ secrets.HF_TOKEN }} --add-to-git-credential
     - name: Test MLPerf Inference LLAMA 2 70B reference implementation
       run: |
8 changes: 5 additions & 3 deletions .github/workflows/test-mlperf-inference-mixtral.yml
@@ -5,12 +5,12 @@ name: MLPerf inference MIXTRAL-8x7B
 
 on:
   schedule:
-    - cron: "30 20 * * *" # 30th minute and 20th hour => 20:30 UTC => 2 AM IST
+    - cron: "45 10 * * *" # 30th minute and 20th hour => 20:30 UTC => 2 AM IST
 
 jobs:
   build_reference:
     if: github.repository_owner == 'gateoverflow'
-    runs-on: [ self-hosted, GO-i9, linux, x64 ]
+    runs-on: [ self-hosted, GO-spr, linux, x64 ]
     strategy:
       fail-fast: false
       matrix:
@@ -24,7 +24,9 @@ jobs:
         source gh_action/bin/deactivate || python3 -m venv gh_action
         source gh_action/bin/activate
         export CM_REPOS=$HOME/GH_CM
-        python3 -m pip install cm4mlops
+        pip install cm4mlops
+        pip install "huggingface_hub[cli]"
+        huggingface-cli login --token ${{ secrets.HF_TOKEN }} --add-to-git-credential
         cm pull repo
         cm run script --tags=run-mlperf,inference,_submission,_short --submitter="MLCommons" --model=mixtral-8x7b --implementation=reference --batch_size=1 --backend=${{ matrix.backend }} --category=datacenter --scenario=Offline --execution_mode=test --device=${{ matrix.device }} --docker_it=no --docker_cm_repo=gateoverflow@cm4mlops --adr.compiler.tags=gcc --hw_name=gh_action --docker_dt=yes --results_dir=$HOME/gh_action_results --submission_dir=$HOME/gh_action_submissions --docker --quiet --test_query_count=1 --target_qps=1 --clean --env.CM_MLPERF_MODEL_MIXTRAL_8X7B_DOWNLOAD_TO_HOST=yes --env.CM_MLPERF_DATASET_MIXTRAL_8X7B_DOWNLOAD_TO_HOST=yes
         cm run script --tags=push,github,mlperf,inference,submission --repo_url=https://github.com/gateoverflow/mlperf_inference_test_submissions_v5.0 --repo_branch=main --commit_message="Results from self hosted Github actions - GO-i9" --quiet --submission_dir=$HOME/gh_action_submissions
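Editor's note: the MIXTRAL job now authenticates to Hugging Face during setup so gated model downloads succeed inside the run. The equivalent outside CI, assuming a read-scoped token exported as HF_TOKEN:

pip install "huggingface_hub[cli]"
huggingface-cli login --token "$HF_TOKEN" --add-to-git-credential   # caches the token for git-based downloads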
@@ -2,7 +2,7 @@ name: MLPerf Inference Nvidia implementations
 
 on:
   schedule:
-    - cron: "29 20 * * *" #to be adjusted
+    - cron: "49 19 * * *" #to be adjusted
 
 jobs:
   build_nvidia:
@@ -21,5 +21,6 @@ jobs:
         source gh_action/bin/activate
         export CM_REPOS=$HOME/GH_CM
         pip install --upgrade cm4mlops
-        cm run script --tags=run-mlperf,inference,_all-scenarios,_submission,_full,_r4.1-dev --preprocess_submission=yes --execution_mode=valid --gpu_name=rtx_4090 --pull_changes=yes --model=${{ matrix.model }} --submitter="MLCommons" --hw_name=RTX4090x2 --implementation=nvidia --backend=tensorrt --category=datacenter,edge --division=closed --docker_dt=yes --docker_it=no --docker_cm_repo=gateoverflow@cm4mlops --adr.compiler.tags=gcc --device=cuda --use_dataset_from_host=yes --results_dir=$HOME/gh_action_results --submission_dir=$HOME/gh_action_submissions --clean --docker --quiet
+        pip install tabulate
+        cm run script --tags=run-mlperf,inference,_all-scenarios,_submission,_full,_r4.1-dev --preprocess_submission=yes --execution_mode=valid --gpu_name=rtx_4090 --pull_changes=yes --pull_inference_changes=yes --model=${{ matrix.model }} --submitter="MLCommons" --hw_name=RTX4090x2 --implementation=nvidia --backend=tensorrt --category=datacenter,edge --division=closed --docker_dt=yes --docker_it=no --docker_cm_repo=gateoverflow@cm4mlops --adr.compiler.tags=gcc --device=cuda --use_dataset_from_host=yes --results_dir=$HOME/gh_action_results --submission_dir=$HOME/gh_action_submissions --clean --docker --quiet
         cm run script --tags=push,github,mlperf,inference,submission --repo_url=https://github.com/gateoverflow/mlperf_inference_unofficial_submissions_v5.0 --repo_branch=main --commit_message="Results from GH action on NVIDIA_RTX4090x2" --quiet --submission_dir=$HOME/gh_action_submissions --hw_name=RTX4090x2
4 changes: 2 additions & 2 deletions .github/workflows/test-scc24-sdxl.yaml
@@ -2,7 +2,7 @@ name: MLPerf inference SDXL (SCC)
 
 on:
   schedule:
-    - cron: "20 14 * * *"
+    - cron: "35 19 * * *"
 
 jobs:
   build_reference:
@@ -29,7 +29,7 @@
         cm pull repo
         cm run script --tags=run-mlperf,inference,_find-performance,_r4.1-dev,_short,_scc24-base --pull_changes=yes --model=sdxl --implementation=reference --backend=${{ matrix.backend }} --category=datacenter --scenario=Offline --execution_mode=test --device=${{ matrix.device }} --precision=${{ matrix.precision }} --docker --docker_it=no --docker_cm_repo=gateoverflow@cm4mlops --docker_dt=yes --quiet --results_dir=$HOME/scc_gh_action_results --submission_dir=$HOME/scc_gh_action_submissions --precision=float16 --env.CM_MLPERF_MODEL_SDXL_DOWNLOAD_TO_HOST=yes --clean
         cm run script --tags=run-mlperf,inference,_r4.1-dev,_short,_scc24-base --model=sdxl --implementation=reference --backend=${{ matrix.backend }} --category=datacenter --scenario=Offline --execution_mode=test --device=${{ matrix.device }} --precision=${{ matrix.precision }} --docker --docker_it=no --docker_cm_repo=gateoverflow@cm4mlops --docker_dt=yes --quiet --results_dir=$HOME/scc_gh_action_results --submission_dir=$HOME/scc_gh_action_submissions --precision=float16 --env.CM_MLPERF_MODEL_SDXL_DOWNLOAD_TO_HOST=yes --clean
-        cm run script --tags=generate,inference,submission --clean --preprocess_submission=yes --run-checker --tar=yes --env.CM_TAR_OUTFILE=submission.tar.gz --division=open --category=datacenter --run_style=test --adr.submission-checker.tags=_short-run --quiet --submitter=MLCommons --submission_dir=$HOME/scc_gh_action_submissions --results_dir=$HOME/scc_gh_action_results/test_results
+        cm run script --tags=generate,inference,submission --clean --run-checker --tar=yes --env.CM_TAR_OUTFILE=submission.tar.gz --division=open --category=datacenter --run_style=test --adr.submission-checker.tags=_short-run --quiet --submitter=MLCommons --submission_dir=$HOME/scc_gh_action_submissions --results_dir=$HOME/scc_gh_action_results/test_results
         cm run script --tags=push,github,mlperf,inference,submission --repo_url=https://github.com/gateoverflow/cm4mlperf-inference --repo_branch=mlperf-inference-results-scc24 --commit_message="Results from self hosted Github actions - NVIDIARTX4090" --quiet --submission_dir=$HOME/scc_gh_action_submissions
 
   build_nvidia:
2 changes: 1 addition & 1 deletion VERSION
@@ -1 +1 @@
-0.3.24
+0.3.25
10 changes: 0 additions & 10 deletions project/mlperf-inference-v3.0-submissions/README.md

This file was deleted.

7 changes: 0 additions & 7 deletions project/mlperf-inference-v3.0-submissions/_cm.json

This file was deleted.
