Pull requests: abetlen/llama-cpp-python
Fix: add missing 'seed' attribute to llama_context_params initialization
#1845 opened Nov 27, 2024 by sergey21000
chore(deps): bump pypa/cibuildwheel from 2.21.1 to 2.22.0
Labels: dependencies (Pull requests that update a dependency file), github_actions (Pull requests that update GitHub Actions code)
#1844 opened Nov 25, 2024 by dependabot[bot]
Add musa_simple Dockerfile for supporting Moore Threads GPU
#1842 opened Nov 25, 2024 by yeahdongcn
use n_threads param to call _embed_image_bytes fun
#1834 opened Nov 16, 2024 by KenForever1
chore(deps): bump conda-incubator/setup-miniconda from 3.0.4 to 3.1.0
Labels: dependencies, github_actions
#1821 opened Nov 4, 2024 by dependabot[bot]
docs: Remove ref to llama_eval in llama_cpp.py docs
#1819 opened Nov 2, 2024 by richdougherty
Support LoRA hotswapping and multiple LoRAs at a time
#1817 opened Oct 30, 2024 by richdougherty
fix: make content not required in ChatCompletionRequestAssistantMessage
#1807 opened Oct 21, 2024 by feloy
fix: Avoid thread starvation on many concurrent requests by making use of asyncio to lock llama_proxy context
#1798 opened Oct 15, 2024 by gjpower
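The PR above serializes access to the shared llama_proxy context with an asyncio lock rather than a thread lock. As a generic illustration only (a sketch, not the PR's actual code; SharedModel and handle_request are hypothetical names), an asyncio.Lock lets concurrent request coroutines queue cooperatively on the event loop instead of blocking worker threads:

```python
import asyncio

class SharedModel:
    # Hypothetical stand-in for a shared, non-thread-safe resource
    # such as a loaded model context.
    def __init__(self):
        self.calls = 0

    def generate(self, prompt: str) -> str:
        self.calls += 1
        return f"echo: {prompt}"

model = SharedModel()
model_lock = asyncio.Lock()  # cooperative lock: waiters suspend, not block

async def handle_request(prompt: str) -> str:
    # Awaiting the lock suspends this coroutine and yields the event loop,
    # so many concurrent requests cannot starve a thread pool.
    async with model_lock:
        return model.generate(prompt)

async def main():
    # Five "concurrent" requests; the lock serializes access to the model.
    return await asyncio.gather(*(handle_request(f"p{i}") for i in range(5)))

results = asyncio.run(main())
```

Because the lock is awaited rather than acquired synchronously, a request waiting its turn costs no thread, which is the general idea behind avoiding starvation under heavy concurrency.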
fix: added missing exit_stack.close() to /v1/chat/completions
#1796 opened Oct 14, 2024 by Ian321
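The fix above concerns a contextlib.ExitStack that was never closed in a request handler. As a generic stdlib sketch (Resource and handle_request are hypothetical, not the server's real objects), forgetting ExitStack.close() means every context entered on the stack leaks; closing it releases them in LIFO order:

```python
from contextlib import ExitStack

log = []

class Resource:
    # Toy context manager standing in for per-request resources.
    def __init__(self, name):
        self.name = name

    def __enter__(self):
        log.append(f"acquire {self.name}")
        return self

    def __exit__(self, *exc):
        log.append(f"release {self.name}")

def handle_request():
    stack = ExitStack()
    try:
        stack.enter_context(Resource("a"))
        stack.enter_context(Resource("b"))
        return "response"
    finally:
        # Without this close(), neither __exit__ would ever run
        # and both resources would leak for the life of the process.
        stack.close()

handle_request()
```

ExitStack.close() unwinds the stack exactly as leaving a nested `with` block would, which is why placing it in a `finally` (or using the stack itself as a context manager) is the usual pattern.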
Fix: Refactor Batching notebook to use new sampler chain API
#1793 opened Oct 13, 2024 by lukestanley
server types: Move 'model' parameter to clarify it is used
#1786 opened Oct 5, 2024 by domdomegg