Actions: 3Simplex/llama.cpp

Workflow: Server

49 workflow runs

Update building for Android (#9672)
Server #24: Commit f1af42f pushed by 3Simplex · master · October 7, 2024 17:02 · 25m 8s
llama : print correct model type for Llama 3.2 1B and 3B
Server #23: Commit a90484c pushed by 3Simplex · master · October 1, 2024 12:40 · 14m 18s
CUDA: fix variable name conflict for Windows build (#9382)
Server #22: Commit 8e6e2fb pushed by 3Simplex · master · September 9, 2024 12:40 · 31m 16s
readme : refactor API section + remove old hot topics
Server #21: Commit b69a480 pushed by 3Simplex · master · September 3, 2024 16:33 · 9m 28s
docker : update CUDA images (#9213)
Server #20: Commit 66b039a pushed by 3Simplex · master · August 28, 2024 15:13 · 20m 23s
server : support reading arguments from environment variables (#9105)
Server #19: Commit fc54ef0 pushed by 3Simplex · master · August 21, 2024 16:12 · 18m 15s
docs: introduce gpustack and gguf-parser (#8873)
Server #18: Commit 84eb2f4 pushed by 3Simplex · master · August 12, 2024 13:09 · 9m 11s
flake.lock: Update (#8979)
Server #17: Commit 8cd1bcf pushed by 3Simplex · master · August 11, 2024 15:31 · 9m 35s
scripts : sync cann files (#0)
Server #16: Commit afd27f0 pushed by 3Simplex · master · August 8, 2024 12:57 · 24m 13s
Server #15: August 6, 2024 12:49 · 8m 55s
cann: Fix ggml_cann_im2col for 1D im2col (#8819)
Server #14: Commit e09a800 pushed by 3Simplex · master · August 2, 2024 12:02 · 26m 8s
Server #13: August 1, 2024 18:01 · 22m 50s
nix: cuda: rely on propagatedBuildInputs (#8772)
Server #12: Commit 268c566 pushed by 3Simplex · master · July 31, 2024 12:04 · 8m 50s
llama : add support for llama 3.1 rope scaling factors (#8676)
Server #11: Commit b5e9546 pushed by 3Simplex · master · July 27, 2024 13:04 · 10m 1s
ggml : reduce hash table reset cost (#8698)
Server #10: Commit 2b1f616 pushed by 3Simplex · master · July 27, 2024 04:09 · 9m 33s
llama : fix order of parameters (#8706)
Server #9: Commit 01245f5 pushed by 3Simplex · master · July 26, 2024 12:24 · 8m 35s
llama : fix build + fix fabs compile warnings (#8683)
Server #8: Commit 4226a8d pushed by 3Simplex · master · July 25, 2024 17:33 · 8m 21s
[SYCL] fix multi-gpu issue on sycl (#8554)
Server #7: Commit ed67bcb pushed by 3Simplex · master · July 25, 2024 11:46 · 8m 38s
readme : update games list (#8673)
Server #6: Commit 68504f0 pushed by 3Simplex · master · July 24, 2024 21:16 · 9m 31s
flake.lock: Update (#8610)
Server #5: Commit 45f2c19 pushed by 3Simplex · master · July 21, 2024 22:57 · 10m 4s
flake.lock: Update (#8475)
Server #4: Commit aaab241 pushed by 3Simplex · master · July 14, 2024 20:15 · 8m 20s
server : handle content array in chat API (#8449)
Server #3: Commit 4e24cff pushed by 3Simplex · master · July 12, 2024 15:33 · 8m 48s
CUDA: optimize and refactor MMQ (#8416)
Server #2: Commit 808aba3 pushed by 3Simplex · master · July 11, 2024 15:18 · 8m 16s
Server #1: July 10, 2024 20:27 · 1h 20m 46s