Test transformers 41 #12

Open: wants to merge 194 commits into base: test_transformers_41

Changes from all commits (194 commits)
9e65cf0
Add openai-whisper pytorch gpu (#11736)
lzivan Aug 8, 2024
107f7aa
enable inference mode for deepspeed tp serving (#11742)
liu-shaojun Aug 8, 2024
27b4b10
Add `qwen2-1.5b-instruct` into igpu performance (#11735)
JinheTang Aug 8, 2024
044e486
Fix vLLM CPU /chat endpoint (#11748)
xiangyuT Aug 9, 2024
d8808cc
Mistral apply_rotary_pos_emb_no_cache_xpu use rope_theta from config …
qiyuangong Aug 9, 2024
dd46c14
Phi3 support compresskv (#11733)
cyita Aug 9, 2024
7e917d6
fix gptq of llama (#11749)
rnwang04 Aug 9, 2024
93455aa
fix minicpm V 2.6 repeat output (#11753)
MeouSker77 Aug 9, 2024
4b9c57c
Support compress kv with lookahead (#11752)
cyita Aug 9, 2024
66fe2ee
initial support of `IPEX_LLM_PERFORMANCE_MODE` (#11754)
rnwang04 Aug 9, 2024
245dba0
Fix lightweight-serving codegeex error (#11759)
hzjane Aug 12, 2024
fac4c01
Revert to use out-of-tree GPU driver (#11761)
liu-shaojun Aug 12, 2024
57d1777
optimize minicpm-v-2_6 repetition penalty (#11763)
MeouSker77 Aug 12, 2024
05989ad
Update npu example and all in one benckmark (#11766)
JinBridger Aug 12, 2024
8db3405
optimize lookahead init time (#11769)
rnwang04 Aug 12, 2024
1b05cab
Set mistral fuse rope to false except fp6 & fp16 (#11765)
ATMxsp01 Aug 12, 2024
f97a77e
Update all-in-one benchmark for `continuation` task input preparation…
Oscilloscope98 Aug 12, 2024
841dbcd
Fix compresskv with lookahead issue (#11767)
cyita Aug 12, 2024
a1eb793
optimize minicpm v 2_6 firs token perf (#11770)
MeouSker77 Aug 13, 2024
81824ff
Fix stdout in all-in-one benchmark to utf-8 (#11772)
Oscilloscope98 Aug 13, 2024
c28b338
Update npu multimodal example (#11773)
JinBridger Aug 13, 2024
23d3acd
Add experimental support of fused decoder layer for llama2 (#11768)
plusbang Aug 13, 2024
aa861df
use new fp32 softmax kernel (#11776)
MeouSker77 Aug 13, 2024
a88c132
Reduce Mistral softmax memory only in low memory mode (#11775)
qiyuangong Aug 13, 2024
ec184af
Add `gemma-2-2b-it` and `gemma-2-9b-it` to igpu nightly performance t…
Oscilloscope98 Aug 13, 2024
a184b12
fix minicpm-v 2.5 (#11780)
MeouSker77 Aug 13, 2024
70c828b
deepspeed zero3 QLoRA finetuning (#11625)
Uxito-Ada Aug 13, 2024
3998de1
Fix mistral forward_qkv in q4_0 (#11781)
qiyuangong Aug 13, 2024
7cd6ec9
MiniCPM-V support compresskv (#11779)
cyita Aug 13, 2024
cb79dcd
refactor llama convert to fix minicpm-v 2.5 optimization (#11783)
MeouSker77 Aug 14, 2024
51bcac1
follow up on experimental support of fused decoder layer for llama2 (…
yangw1234 Aug 14, 2024
dbd1425
Troubleshoot for sycl not found (#11774)
JinheTang Aug 14, 2024
43cca3b
fix gemma2 runtime error caused by sliding window (#11788)
rnwang04 Aug 14, 2024
356281c
Further all-in-one benchmark update `continuation` task (#11784)
Oscilloscope98 Aug 14, 2024
3d6cfa2
optimize minicpm v 2.5 (#11793)
MeouSker77 Aug 14, 2024
d8d887e
added minicpm-v-2_6 (#11794)
JinheTang Aug 14, 2024
9a93808
fix and optimize minicpm v 2 (#11799)
MeouSker77 Aug 14, 2024
e3c1dae
Fix Windows Unit Test (#11801)
liu-shaojun Aug 14, 2024
016e840
Fix performance tests (#11802)
Oscilloscope98 Aug 14, 2024
3ac83f8
fix: delete ipex extension import in ppl wikitext evaluation (#11806)
cranechu0131 Aug 15, 2024
07b7f13
support and optimize qwen2-audio (#11809)
MeouSker77 Aug 15, 2024
f43da2d
deletion of specification of transformers version (#11808)
JinheTang Aug 15, 2024
2fbbb51
transformers==4.37, yi & yuan2 & vicuna (#11805)
JinheTang Aug 15, 2024
447c8ed
update transformers version for `replit-code-v1-3b`, `internlm2-chat-…
ch1y0q Aug 15, 2024
4e178f0
rewrite minicpmv optimization (#11816)
MeouSker77 Aug 15, 2024
828ab16
fix phi3 and minicpmv cpu (#11818)
MeouSker77 Aug 15, 2024
28d1c97
add mixed_precision argument on ppl wikitext evaluation (#11813)
cranechu0131 Aug 15, 2024
6543321
Remove 4k igpu perf on gemma-2-9b-it (#11820)
Oscilloscope98 Aug 15, 2024
750d4ad
fix minicpm-v-2 fp16 (#11819)
MeouSker77 Aug 15, 2024
e70ae06
Fix vLLM not convert issues (#11817)
gc-fu Aug 15, 2024
5a80fd2
Fix lightweight-serving no streaming resp on mtl (#11822)
hzjane Aug 16, 2024
9e9086c
Update `IPEX_LLM_PERFORMANCE_MODE` (#11823)
Oscilloscope98 Aug 16, 2024
6a8d07d
Update README.md (#11824)
jason-dai Aug 16, 2024
17a0beb
optimize qwen2-audio again (#11825)
MeouSker77 Aug 16, 2024
f463268
fix: add run oneAPI instruction for the example of codeshell (#11828)
cranechu0131 Aug 16, 2024
adfbb91
Reorganize MiniCPM-V-2_6 example & update others MiniCPM-V-2 exmaples…
JinheTang Aug 16, 2024
a508b0a
added link to minicpm-v-2_6 example (#11829)
JinheTang Aug 16, 2024
e07a556
Codegeex2 tokenization fix (#11831)
JinheTang Aug 16, 2024
3b630fb
updated ppl README (#11807)
RyuKosei Aug 16, 2024
e966e85
force lm_head optimization in any model if set environment variable (…
MeouSker77 Aug 16, 2024
96796f9
Update all-in-one benchmark prompts for `continuation` task & lookup …
Oscilloscope98 Aug 16, 2024
9f17234
Add MiniCPM-V-2_6 to iGPU Perf (#11810)
JinBridger Aug 16, 2024
580c94d
Remove gemma-2-9b-it 3k input from igpu-perf (#11834)
Oscilloscope98 Aug 17, 2024
46a1cbf
feat: add mixed_precision argument on ppl longbench evaluation (#11837)
cranechu0131 Aug 19, 2024
cfc959d
Fixes regarding utf-8 in all-in-one benchmark (#11839)
Oscilloscope98 Aug 19, 2024
6841a9a
fix load low bit com dtype (#11832)
leonardozcm Aug 19, 2024
3cd4e87
Support compress KV with quantize KV (#11812)
cyita Aug 19, 2024
9490781
optimize phi3 memory usage again (#11848)
MeouSker77 Aug 19, 2024
a0fbda5
add MiniCPM-Llama3-V-2_5 into all-in-one benchmark (#11849)
rnwang04 Aug 19, 2024
da3d7a3
delete transformers version requirement (#11845)
JinheTang Aug 19, 2024
99b05ba
separate prefill into a process (#11787)
yangw1234 Aug 19, 2024
7380823
Update Llama2 multi-processes example (#11852)
sgwhat Aug 19, 2024
2946420
add minicpmv 2.6 load_low_bit workaround (#11856)
MeouSker77 Aug 20, 2024
ee6852c
Fix typo (#11862)
Uxito-Ada Aug 20, 2024
5b83493
Add ipex-llm npu option in setup.py (#11858)
sgwhat Aug 20, 2024
d4ee0a8
optimize phi3 memory usage (#11867)
MeouSker77 Aug 20, 2024
5e8286f
Update `ipex-llm` default transformers version to 4.37.0 (#11859)
Oscilloscope98 Aug 20, 2024
0d58c2f
Update performance test regarding updated default `transformers==4.37…
Oscilloscope98 Aug 20, 2024
3ee194d
Pytorch models transformers version update (#11860)
JinheTang Aug 20, 2024
c3c0583
Update compresskv model forward type logic (#11868)
cyita Aug 20, 2024
5df0086
Update local import for ppl (#11866)
RyuKosei Aug 20, 2024
32f0a77
feat: update readme for ppl test (#11865)
cranechu0131 Aug 20, 2024
bdaeee1
Fix run_decoders bug (#11871)
yangw1234 Aug 20, 2024
37106a8
igpu performance test smal fix (#11872)
Oscilloscope98 Aug 20, 2024
eab6f6d
Spr perf small fix (#11874)
Oscilloscope98 Aug 21, 2024
bd1e490
fix phi3 (#11878)
MeouSker77 Aug 21, 2024
537c0d2
fix vllm qwen2 models (#11879)
gc-fu Aug 21, 2024
209d42a
Refactor npu mp to make it easier to integrate new models (#11873)
yangw1234 Aug 21, 2024
8c5c7f3
Update doc for running npu generate example with ipex-llm[npu] (#11876)
sgwhat Aug 21, 2024
0236de3
set IPEX_LLM_LAST_LM_HEAD=1 as default (#11885)
cyita Aug 21, 2024
cc27321
support chatglm4 in lookup (#11855)
cyita Aug 21, 2024
bdbe995
Update README.md (#11889)
lzivan Aug 22, 2024
86248b0
add compress_kv for baichuan2
hxsz1997 Aug 22, 2024
6bb9035
fix typos
hxsz1997 Aug 22, 2024
72a7bf6
Support qwen2-1.5b with fused decoderlayer optimization on NPU (#11888)
plusbang Aug 22, 2024
6a5ca17
fix typoes
hxsz1997 Aug 22, 2024
bac98ba
Make performance test install specific ipex-llm version from pypi (#1…
Oscilloscope98 Aug 22, 2024
4adaddd
fix typos
hxsz1997 Aug 22, 2024
2a0aa92
fix typos
hxsz1997 Aug 22, 2024
c6ed1c4
fix typos
hxsz1997 Aug 22, 2024
01ed397
fix typos
hxsz1997 Aug 22, 2024
8a5df93
fix typos
hxsz1997 Aug 22, 2024
48a827a
fix typos
hxsz1997 Aug 22, 2024
42398a0
add comment
hxsz1997 Aug 22, 2024
ce7de77
add comment of change in model forward
hxsz1997 Aug 22, 2024
a8e2573
added tokenization file for codegeex2-6b in pytorch-models(#11875)
JinheTang Aug 22, 2024
a2be3d7
add comment of compress kv in attention forward
hxsz1997 Aug 22, 2024
eb1e65f
add comment
hxsz1997 Aug 22, 2024
5c4ed00
Add lightweight-serving whisper asr example (#11847)
hzjane Aug 22, 2024
18662dc
change 5 pytorch/huggingface models to fp16 (#11894)
JinheTang Aug 22, 2024
c5b51d4
Update pypi tag to 2.2.0.dev0 (#11895)
liu-shaojun Aug 22, 2024
278b191
Fix optimize lm head error (#11899)
gc-fu Aug 22, 2024
794abe2
update npu-readme (#11900)
lzivan Aug 22, 2024
4cf03d6
update baichuan-7b
hxsz1997 Aug 22, 2024
420ce7d
Fix non-stop at eos token problem for lookup generation (#11896)
Oscilloscope98 Aug 22, 2024
4a61f7d
update mlp of llama (#11897)
rnwang04 Aug 22, 2024
650e6e6
Merge pull request #11891 from hxsz1997/baichuan2-compresskv
hxsz1997 Aug 23, 2024
4cf640c
update docker image tag to 2.2.0-SNAPSHOT (#11904)
liu-shaojun Aug 23, 2024
23631cd
disable lm_head opt for baichuan2-13b (#11905)
cyita Aug 23, 2024
303a090
Add lm_head optimization on NPU (#11903)
plusbang Aug 23, 2024
24c279e
Update `IPEX_LLM_PERFORMANCE_MODE` with input length threshold (#11908)
Oscilloscope98 Aug 23, 2024
e5dc4e9
disable outdated scheduled workflow (#11915)
liu-shaojun Aug 23, 2024
dd30377
Add troubleshooting about transpose value setting
plusbang Aug 26, 2024
019f725
[NPU] Add support for running mp minicpm model on npu (#11909)
sgwhat Aug 26, 2024
a0bbd8e
All-in-one benchmark update regarding performance mode for input leng…
Oscilloscope98 Aug 26, 2024
c1d07bc
Support streaming for lookup generation (#11922)
Oscilloscope98 Aug 26, 2024
5a8fc1b
update troubleshooting for llama.cpp and ollama (#11890)
ch1y0q Aug 26, 2024
7ca557a
LLM: Fix vLLM CPU convert error (#11926)
xiangyuT Aug 27, 2024
6c3eb1e
refactor from_pretrained API for NPU (#11927)
lzivan Aug 27, 2024
14dddfc
Update NPU example readme (#11931)
plusbang Aug 27, 2024
e246f1e
update llama3 npu example (#11933)
cyita Aug 27, 2024
b11b28e
update CORE_XE_VERSION to 2.6.0 (#11929)
liu-shaojun Aug 27, 2024
730d9ec
Add Qwen2-audio example (#11835)
ch1y0q Aug 27, 2024
7c8c9a0
Update benchmark script for NPU (#11932)
plusbang Aug 27, 2024
a81a329
[NPU] Add example for NPU multi-processing minicpm-1b model (#11935)
sgwhat Aug 27, 2024
e211a5b
update minicpm to meet latest refactor (#11937)
sgwhat Aug 27, 2024
b4b6ddf
NPU Baichuan2 Multi- Process example (#11928)
jenniew Aug 27, 2024
7f7f6c8
Quick fix benchmark script (#11938)
plusbang Aug 27, 2024
90f6929
Update npu baichuan2 (#11939)
lzivan Aug 27, 2024
bec00e2
Improve baichuan2 NPU performance (#11942)
plusbang Aug 27, 2024
460bc96
update version of llama.cpp / ollama (#11930)
rnwang04 Aug 27, 2024
23f51f8
update tag to 2.2.0-SNAPSHOT (#11947)
liu-shaojun Aug 28, 2024
db0a280
test main 3072-384
songhappy Aug 28, 2024
27f75f5
main 3k test
songhappy Aug 28, 2024
12865dd
log conf
songhappy Aug 28, 2024
e23549f
Update llamaindex examples (#11940)
hxsz1997 Aug 28, 2024
6a6549f
41 new run
songhappy Aug 28, 2024
ec67ee7
added accelerate version specification in open webui quickstart(#11948)
JinheTang Aug 28, 2024
b38fb67
[NPU] lm head to cpu (#11943)
cyita Aug 28, 2024
0a7bd27
Add vllm awq loading logic (#11950)
gc-fu Aug 28, 2024
0fbb102
use sdp_causal to reduce internvl2-4b memory usage if set environment…
MeouSker77 Aug 28, 2024
5ca7390
[NPU] Add minicpm-2b support for npu multi-processing (#11949)
sgwhat Aug 28, 2024
63ac5f6
Refactor NPU baichuan multiple-process (#11945)
jenniew Aug 28, 2024
71f03dc
Support qwen2-7b with fused decoderlayer optimization on NPU (#11912)
plusbang Aug 29, 2024
882f4a5
Add lnl npu driver recommend version and enable cpu_lm_head on llama3…
cyita Aug 29, 2024
5f7ff76
update troubleshooting (#11960)
cyita Aug 29, 2024
6fc9340
restore ollama webui quickstart (#11955)
JinheTang Aug 29, 2024
7abe17d
Update MiniCPM-V-2_6 Example (#11958)
Oscilloscope98 Aug 29, 2024
14b2c8d
Update qwen2-7b example script (#11961)
plusbang Aug 29, 2024
431affd
Update README.md (#11964)
jason-dai Aug 29, 2024
2e49e1f
Further fix for MiniCPM-V-2_6 example (#11965)
Oscilloscope98 Aug 29, 2024
a9e485e
Support MiniCPM-V-2_6 multi-modal benchmarking with latency text stre…
Oscilloscope98 Aug 29, 2024
fbf088f
remove obselete npu code (#11967)
yangw1234 Aug 29, 2024
77b04ef
add notes for `SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS` (#11936)
ch1y0q Aug 30, 2024
7d10341
Fix glm4-9b-chat nan error on vllm 0.3.3 (#11970)
hzjane Aug 30, 2024
cd07788
Disable lm head (#11972)
plusbang Aug 30, 2024
e895e1b
modification on llamacpp readme after Ipex-llm latest update (#11971)
JinheTang Aug 30, 2024
1e8c870
fix model path (#11973)
liu-shaojun Aug 30, 2024
ae7302a
add gptq option for ppl test (#11921)
cranechu0131 Aug 30, 2024
158289d
[NPU] Add initial support for minicpm-llama-v2.5 (#11962)
sgwhat Aug 30, 2024
60aa1a2
Initial NPU support for MiniCPM-V-2_6 (#11966)
rnwang04 Aug 30, 2024
573c20b
fix npu lm_head cpu condition (#11976)
rnwang04 Aug 30, 2024
4811a49
small fix (#11978)
rnwang04 Aug 30, 2024
79978e6
update npu multimodal readme (#11979)
rnwang04 Aug 30, 2024
9e5518e
Merge branch 'main' of https://github.com/intel-analytics/ipex-llm in…
songhappy Aug 30, 2024
f325660
for cpu
songhappy Aug 30, 2024
65e281b
Add MiniCPM-V cpu example (#11975)
JinBridger Sep 2, 2024
c48817b
Support Qwen2-7b MLP in int4 and transpose_value_cache=True (#11968)
yangw1234 Sep 2, 2024
a40ea70
Fix AttributeError of qwen2-1.5B (#11990)
plusbang Sep 2, 2024
2f3d1bd
hotfix qwen2-7b weight setting (#11991)
plusbang Sep 2, 2024
659d15d
Fix wrong attention mask and garbage output for `inputs_embeds` input…
Oscilloscope98 Sep 2, 2024
01099f0
Revert prefill logic of qwen2-7b (#11992)
plusbang Sep 3, 2024
643458d
Update GraphRAG QuickStart (#11995)
Oscilloscope98 Sep 3, 2024
2e54f44
Rename MiniCPM-V-2_6 CPU example (#11998)
JinBridger Sep 3, 2024
164f47a
MiniCPM-V-2 & MiniCPM-Llama3-V-2_5 example updates (#11988)
JinheTang Sep 3, 2024
6eb5565
Performance mode strategy update for input_embeds input (#11997)
Oscilloscope98 Sep 3, 2024
9eaff5e
add save & load support for NPU optimized model (#11999)
rnwang04 Sep 3, 2024
2b993ad
vllm update for glm-4 model automatic not_convert (#12003)
hzjane Sep 4, 2024
77cb348
fix dependabot alerts (#12006)
liu-shaojun Sep 4, 2024
b1408a1
fix UT (#12005)
MeouSker77 Sep 4, 2024
c6348a4
Update action.yml (#12016)
liu-shaojun Sep 4, 2024
75b19f8
revert actions/download-artifact version to 3 (#12017)
liu-shaojun Sep 4, 2024
428e62b
update
songhappy Sep 4, 2024
7d8f3a0
new main Merge branch 'main' of https://github.com/intel-analytics/ip…
songhappy Sep 4, 2024
25 changes: 0 additions & 25 deletions .github/actions/llm/cli-test-windows/action.yml

This file was deleted.

16 changes: 8 additions & 8 deletions .github/workflows/llm-c-evaluation.yml

@@ -10,12 +10,12 @@ permissions:
 
 # Controls when the action will run.
 on:
-  schedule:
-    - cron: "00 15 * * *" # GMT time, 15:00 GMT == 23:00 Beijing Time
-  pull_request:
-    branches: [main]
-    paths:
-      - ".github/workflows/llm-c-evaluation.yml"
+  # schedule:
+  #   - cron: "00 15 * * *" # GMT time, 15:00 GMT == 23:00 Beijing Time
+  # pull_request:
+  #   branches: [main]
+  #   paths:
+  #     - ".github/workflows/llm-c-evaluation.yml"
   # Allows you to run this workflow manually from the Actions tab
   workflow_dispatch:
     inputs:
@@ -204,7 +204,7 @@ jobs:
         pip install pandas==1.5.3
 
       - name: Download ceval results
-        uses: actions/download-artifact@v3
+        uses: actions/download-artifact@4.1.7
         with:
           name: ceval_results
           path: results
@@ -259,7 +259,7 @@ jobs:
         fi
 
       - name: Download ceval results
-        uses: actions/download-artifact@v3
+        uses: actions/download-artifact@4.1.7
         with:
           name: results_${{ needs.set-matrix.outputs.date }}
           path: ${{ env.ACC_FOLDER }}
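Taken together, the trigger edit above leaves this workflow runnable only by manual dispatch. A minimal reconstruction of the post-change `on:` block is sketched below; the indentation and anything beyond what the diff shows (e.g. the `inputs:` definitions) are assumptions, not part of the PR:

```yaml
# Reconstructed post-change trigger block: the scheduled and pull_request
# triggers are commented out, so only manual runs remain possible.
on:
  # schedule:
  #   - cron: "00 15 * * *" # GMT time, 15:00 GMT == 23:00 Beijing Time
  # pull_request:
  #   branches: [main]
  #   paths:
  #     - ".github/workflows/llm-c-evaluation.yml"
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:
    # inputs: ... (input definitions elided in the diff)
```

The same pattern is applied to the harness, ppl, and whisper evaluation workflows in this PR.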
16 changes: 8 additions & 8 deletions .github/workflows/llm-harness-evaluation.yml

@@ -10,12 +10,12 @@ permissions:
 
 # Controls when the action will run.
 on:
-  schedule:
-    - cron: "30 12 * * *" # GMT time, 12:30 GMT == 20:30 China
-  pull_request:
-    branches: [main]
-    paths:
-      - ".github/workflows/llm-harness-evaluation.yml"
+  # schedule:
+  #   - cron: "30 12 * * *" # GMT time, 12:30 GMT == 20:30 China
+  # pull_request:
+  #   branches: [main]
+  #   paths:
+  #     - ".github/workflows/llm-harness-evaluation.yml"
   # Allows you to run this workflow manually from the Actions tab
   workflow_dispatch:
     inputs:
@@ -220,7 +220,7 @@ jobs:
         pip install --upgrade pip
         pip install jsonlines pytablewriter regex
       - name: Download all results
-        uses: actions/download-artifact@v3
+        uses: actions/download-artifact@4.1.7
         with:
           name: harness_results
           path: results
@@ -260,7 +260,7 @@ jobs:
         fi
 
       - name: Download harness results
-        uses: actions/download-artifact@v3
+        uses: actions/download-artifact@4.1.7
         with:
           name: harness_results
           path: ${{ env.ACC_FOLDER}}/${{ env.DATE }}
16 changes: 8 additions & 8 deletions .github/workflows/llm-ppl-evaluation.yml

@@ -10,12 +10,12 @@ permissions:
 
 # Controls when the action will run.
 on:
-  schedule:
-    - cron: "00 12 * * *" # GMT time, 12:00 GMT == 20:00 China
-  pull_request:
-    branches: [main]
-    paths:
-      - ".github/workflows/llm-ppl-evaluation.yml"
+  # schedule:
+  #   - cron: "00 12 * * *" # GMT time, 12:00 GMT == 20:00 China
+  # pull_request:
+  #   branches: [main]
+  #   paths:
+  #     - ".github/workflows/llm-ppl-evaluation.yml"
   # Allows you to run this workflow manually from the Actions tab
   workflow_dispatch:
     inputs:
@@ -206,7 +206,7 @@ jobs:
         pip install --upgrade pip
         pip install jsonlines pytablewriter regex
       - name: Download all results
-        uses: actions/download-artifact@v3
+        uses: actions/download-artifact@4.1.7
         with:
           name: ppl_results
           path: results
@@ -245,7 +245,7 @@ jobs:
         fi
 
      - name: Download ppl results
-        uses: actions/download-artifact@v3
+        uses: actions/download-artifact@4.1.7
         with:
           name: ppl_results
           path: ${{ env.ACC_FOLDER}}/${{ env.DATE }}
16 changes: 8 additions & 8 deletions .github/workflows/llm-whisper-evaluation.yml

@@ -10,12 +10,12 @@ permissions:
 
 # Controls when the action will run.
 on:
-  schedule:
-    - cron: "00 13 * * *" # GMT time, 13:00 GMT == 21:00 China
-  pull_request:
-    branches: [main]
-    paths:
-      - ".github/workflows/llm-whisper-evaluation.yml"
+  # schedule:
+  #   - cron: "00 13 * * *" # GMT time, 13:00 GMT == 21:00 China
+  # pull_request:
+  #   branches: [main]
+  #   paths:
+  #     - ".github/workflows/llm-whisper-evaluation.yml"
   # Allows you to run this workflow manually from the Actions tab
   workflow_dispatch:
     inputs:
@@ -176,14 +176,14 @@ jobs:
 
       - name: Download all results for nightly run
         if: github.event_name == 'schedule'
-        uses: actions/download-artifact@v3
+        uses: actions/download-artifact@4.1.7
         with:
           name: whisper_results
           path: ${{ env.NIGHTLY_FOLDER}}/${{ env.OUTPUT_PATH }}
 
      - name: Download all results for pr run
         if: github.event_name == 'pull_request'
-        uses: actions/download-artifact@v3
+        uses: actions/download-artifact@4.1.7
         with:
           name: whisper_results
           path: ${{ env.PR_FOLDER}}/${{ env.OUTPUT_PATH }}
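All four workflow diffs apply the same artifact-action bump. One caution worth flagging (an editorial note, not part of the PR): as committed, the new ref is written `actions/download-artifact@4.1.7` without a leading `v`, while published tags for that action normally take the form `v4.1.7`. A pinned step would therefore typically look like the sketch below; the step and artifact names are copied from the diffs above, and the `v` prefix is an assumption to be verified against the action's releases:

```yaml
# Sketch of one updated download step in isolation. The "v" prefix on the
# ref is assumed from the action's usual tag naming and differs from the
# literal "@4.1.7" in the diffs; check the action's release tags before use.
- name: Download ceval results
  uses: actions/download-artifact@v4.1.7
  with:
    name: ceval_results
    path: results
```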