ggerganov/llama.cpp#964
Here are some open TODOs for LoRA:

- Basic implementation (ggerganov/llama.cpp#820)
- Reduce LoRA application time using SIMD (AVX, AVX2) (ggerganov/llama.cpp#956)
- Reduce LoRA load time using MMAP on the base model
- Quantize an MMAPed float16 base model with LoRA applied
- Weight interpolation (start with 1, look into multiple) (ggerganov/llama.cpp#905)
- Export the loaded model to a binary file (stand-alone in the CLI with a LoRA `--export-lora` flag; interactive (?)) (https://github.com/ggerganov/llama.cpp/issues/904)
- Investigate extracting LoRA for arbitrary models (see huggingface/peft#312)
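For reference, the first and fifth items boil down to simple linear algebra: applying a LoRA adapter means adding a scaled low-rank product `(alpha / rank) * B @ A` to the base weights, and interpolation means blending several such deltas with mixing weights. Below is a minimal pure-Python sketch of both operations; all names (`lora_merge`, `lora_interpolate`, `base_w`, the adapter-tuple layout) are illustrative, not llama.cpp's actual identifiers or file formats.

```python
# Sketch of the LoRA items above: merging one adapter into base weights,
# and interpolating several adapters with mixing weights. Matrices are
# plain lists of lists; names are illustrative, not llama.cpp's.

def lora_delta(lora_b, lora_a, alpha, rank):
    """Return the adapter delta (alpha / rank) * B @ A."""
    scale = alpha / rank
    n, m = len(lora_b), len(lora_a[0])
    return [
        [scale * sum(lora_b[i][k] * lora_a[k][j] for k in range(rank))
         for j in range(m)]
        for i in range(n)
    ]

def lora_merge(base_w, lora_b, lora_a, alpha, rank):
    """Apply one adapter: W' = W + (alpha / rank) * B @ A."""
    d = lora_delta(lora_b, lora_a, alpha, rank)
    return [[w + dv for w, dv in zip(row, drow)]
            for row, drow in zip(base_w, d)]

def lora_interpolate(base_w, adapters, mix):
    """Blend several adapters: W' = W + sum_i mix[i] * delta_i.

    adapters: list of (lora_b, lora_a, alpha, rank) tuples.
    mix: one mixing weight per adapter (the "start with 1" case is a
         single adapter with mix == [1.0]).
    """
    merged = [row[:] for row in base_w]  # copy so base_w is untouched
    for (b, a, alpha, rank), w in zip(adapters, mix):
        for i, drow in enumerate(lora_delta(b, a, alpha, rank)):
            for j, dv in enumerate(drow):
                merged[i][j] += w * dv
    return merged
```

For example, merging a rank-1 adapter `B = [[1], [2]]`, `A = [[3, 4]]` (with `alpha = 1`) into a 2x2 identity base gives `[[4.0, 4.0], [6.0, 9.0]]`; the same adapter interpolated at mix weight 0.5 gives `[[2.5, 2.0], [3.0, 5.0]]`.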