Actions: li-plus/chatglm.cpp

Showing runs from all workflows
545 workflow runs

Dynamic memory allocation. Drop Baichuan/InternLM support in favor of llama.cpp.
Python package #242: Pull request #305 synchronize by li-plus
June 18, 2024 08:17 · 5m 1s · dev

Dynamic memory allocation. Drop Baichuan/InternLM support in favor of llama.cpp.
CMake #265: Pull request #305 synchronize by li-plus
June 18, 2024 08:17 · 3m 27s · dev

Dynamic memory allocation. Drop Baichuan/InternLM support in favor of llama.cpp.
Python package #241: Pull request #305 synchronize by li-plus
June 16, 2024 12:51 · 4m 47s · dev

Dynamic memory allocation. Drop Baichuan/InternLM support in favor of llama.cpp.
CMake #264: Pull request #305 synchronize by li-plus
June 16, 2024 12:51 · 3m 35s · dev

Dynamic memory allocation. Drop Baichuan/InternLM support in favor of llama.cpp.
Python package #240: Pull request #305 synchronize by li-plus
June 15, 2024 03:31 · 4m 20s · dev

Dynamic memory allocation. Drop Baichuan/InternLM support in favor of llama.cpp.
CMake #263: Pull request #305 synchronize by li-plus
June 15, 2024 03:31 · 4m 24s · dev

Disable shared library by default. Set default max_length in api server.
Python package #239: Commit a0f2d4a pushed by li-plus
June 14, 2024 12:56 · 4m 39s · main

Disable shared library by default. Set default max_length in api server.
Python package #238: Pull request #317 opened by li-plus
June 14, 2024 12:51 · 5m 12s · glm4

Disable shared library by default. Set default max_length in api server.
CMake #261: Pull request #317 opened by li-plus
June 14, 2024 12:51 · 4m 26s · glm4

Upload Python Package
Upload Python Package #20: Manually run by li-plus
June 14, 2024 07:53 · 1m 6s · main

Publish Docker Image
Publish Docker Image #18: Manually run by li-plus
June 14, 2024 07:53 · 12m 25s · main

Build Wheels
Build Wheels #18: Manually run by li-plus
June 14, 2024 07:53 · 22m 54s · main

Fix regex lookahead for code input tokenization (#314)
Python package #237: Commit c9a4a70 pushed by li-plus
June 14, 2024 07:52 · 5m 18s · main

Fix regex lookahead for code input tokenization (#314)
CMake #260: Commit c9a4a70 pushed by li-plus
June 14, 2024 07:52 · 4m 10s · main

Fix regex lookahead for code input tokenization
Python package #236: Pull request #314 opened by li-plus
June 14, 2024 07:46 · 4m 55s · glm4

Fix regex lookahead for code input tokenization
CMake #259: Pull request #314 opened by li-plus
June 14, 2024 07:46 · 4m 9s · glm4

Use apply_chat_template to calculate tokens (#309)
CMake #258: Commit 6d671d2 pushed by li-plus
June 13, 2024 11:05 · 5m 4s · main

Use apply_chat_template to calculate tokens (#309)
Python package #235: Commit 6d671d2 pushed by li-plus
June 13, 2024 11:05 · 4m 53s · main

Dynamic memory allocation. Drop Baichuan/InternLM support in favor of llama.cpp.
Python package #233: Pull request #305 synchronize by li-plus
June 13, 2024 07:43 · 4m 14s · dev

Dynamic memory allocation. Drop Baichuan/InternLM support in favor of llama.cpp.
CMake #256: Pull request #305 synchronize by li-plus
June 13, 2024 07:43 · 5m 14s · dev

Build Wheels
Build Wheels #17: Manually run by li-plus
June 13, 2024 02:30 · 26m 53s · main

Publish Docker Image
Publish Docker Image #17: Manually run by li-plus
June 13, 2024 02:30 · 11m 4s · main