Actions: cpumaxx/llama.cpp

Code Coverage

5 workflow runs

Code Coverage #5: Adding support for the --numa argument for llama-bench. (#7080)
Commit 628b299 pushed by cpumaxx · master · May 6, 2024 00:23 · 1m 52s

Code Coverage #4: readme : add note that LLaMA 3 is not supported with convert.py (#7065)
Commit ca36326 pushed by cpumaxx · master · May 5, 2024 06:41 · 1m 56s

Code Coverage #2: Fix more int overflow during quant (PPL/CUDA). (#6563)
Commit e00b4a8 pushed by cpumaxx · llava-cli-remerge · April 28, 2024 22:51 · 1h 22m 24s

Code Coverage #1: Fix more int overflow during quant (PPL/CUDA). (#6563)
Commit e00b4a8 pushed by cpumaxx · master · April 28, 2024 22:47 · 1m 57s