Actions: 3Simplex/llama.cpp

Server

49 workflow runs

llama : remove unused headers (#11109)
Server #49: Commit ecebbd2 pushed by 3Simplex
January 6, 2025 21:50, 5m 40s, master
server : fix missing model id in /model endpoint (#10957)
Server #48: Commit 14b699e pushed by 3Simplex
December 23, 2024 17:18, 5m 5s, master
clip : disable GPU support (#10896)
Server #47: Commit d408bb9 pushed by 3Simplex
December 19, 2024 22:00, 5m 34s, master
rwkv6: add wkv6 support for Vulkan backend (#10829)
Server #46: Commit 160bc03 pushed by 3Simplex
December 16, 2024 21:27, 5m 15s, master
CUDA: fix shared memory access condition for mmv (#10740)
Server #45: Commit 26a8406 pushed by 3Simplex
December 10, 2024 14:26, 4m 26s, master
llama : use cmake for swift build (#10525)
Server #44: Commit 43ed389 pushed by 3Simplex
December 8, 2024 16:35, 5m 20s, master
sync : ggml
Server #43: Commit 0cd182e pushed by 3Simplex
December 5, 2024 16:56, 11m 4s, master
ggml : add predefined list of CPU backend variants to build (#10626)
Server #42: Commit 59f4db1 pushed by 3Simplex
December 4, 2024 13:50, 5m 58s, master
Server #41: December 3, 2024 19:01, 5m 21s
Server #40: December 3, 2024 15:42, 6m 1s
server : add speculative decoding support (#10455)
Server #39: Commit 9ca2e67 pushed by 3Simplex
November 25, 2024 15:58, 8m 25s, master
ci: Update oneAPI runtime dll packaging (#10428)
Server #38: Commit 6dfcfef pushed by 3Simplex
November 22, 2024 13:36, 19m 35s, master
vulkan: predicate max operation in soft_max shaders/soft_max (#10437)
Server #37: Commit 9abe9ee pushed by 3Simplex
November 20, 2024 19:51, 7m 13s, master
cuda : fix CUDA_FLAGS not being applied (#10403)
Server #36: Commit 3ee6382 pushed by 3Simplex
November 19, 2024 14:18, 8m 5s, master
Skip searching root path for cross-compile builds (#10383)
Server #35: Commit 531cb1c pushed by 3Simplex
November 18, 2024 15:56, 21m 26s, master
server: (web UI) Add samplers sequence customization (#10255)
Server #34: Commit bcdb7a2 pushed by 3Simplex
November 16, 2024 17:17, 12m 28s, master
Server #33: November 12, 2024 13:43, 7m 45s
metal : reorder write loop in mul mat kernel + style (#10231)
Server #32: Commit 6423c65 pushed by 3Simplex
November 9, 2024 15:21, 8m 29s, master
DRY: Fixes clone functionality (#10192)
Server #31: Commit 5107e8c pushed by 3Simplex
November 7, 2024 16:00, 8m 46s, master
server : fix smart selection of available slot (#10120)
Server #30: Commit d865d14 pushed by 3Simplex
November 1, 2024 15:18, 8m 24s, master
llama : Add IBM granite template (#10013)
Server #29: Commit 61715d5 pushed by 3Simplex
October 28, 2024 21:36, 18m 5s, master
lora : warn user if new token is added in the adapter (#9948)
Server #28: Commit c421ac0 pushed by 3Simplex
October 22, 2024 12:24, 31m 7s, master
rpc : pack only RPC structs (#9959)
Server #27: Commit d5ebd79 pushed by 3Simplex
October 21, 2024 12:34, 30m 9s, master
Server #26: October 20, 2024 12:53, 5m 49s
rpc : backend refactoring (#9912)
Server #25: Commit afd9909 pushed by 3Simplex
October 18, 2024 14:30, 7m 30s, master