Discussed in #5685
Originally posted by DanCard February 23, 2024
ggml-cuda.cu:3211: ERROR: CUDA kernel vec_dot_q5_K_q8_1_impl_vmmq has no device code compatible with CUDA arch 520. ggml-cuda.cu was compiled for: 520
This worked yesterday. I did a git pull, make clean, and make, and then got this error today.
GPU: NVIDIA RTX 3090
System: Debian testing
Command line: ~/github/llama.cpp/main -m ~/models/miqu-1-70b.q5_K_M.gguf -c 0 -i --color -t 16 --n-gpu-layers 24 --temp 0.8 -p "bob"
I reverted the previous two commits and the issue went away:
~/github/llama.cpp$ git reset --hard HEAD~2
HEAD is now at 334f76f sync : ggml
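
For context on what the error means (as far as I understand the ggml CUDA code): the fast `*_vmmq` dot-product kernels rely on the `__dp4a` instruction, which only exists on compute capability 6.1 and newer, so real device code is only emitted when the build targets such an architecture; otherwise a stub that aborts at runtime is compiled in. An RTX 3090 is compute capability 8.6, so the message suggests the binary ended up being built only for the virtual arch `compute_52` ("arch 520"). The following is a minimal, hypothetical sketch of that guard pattern, not the actual ggml-cuda.cu source; the kernel and variable names are invented for illustration:

```cuda
// Hypothetical, simplified sketch (not the real ggml-cuda.cu code) of the
// pattern behind this error: real device code is only emitted when
// __CUDA_ARCH__ supports __dp4a (>= 6.1); otherwise the kernel body is a
// stub that prints an error and traps at runtime.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void dot_kernel(const int *a, const int *b, int n, int *out) {
#if defined(__CUDA_ARCH__) && __CUDA_ARCH__ >= 610
    // Real device code: each __dp4a accumulates four 8-bit products.
    int sum = 0;
    for (int i = 0; i < n; ++i) {
        sum = __dp4a(a[i], b[i], sum);
    }
    *out = sum;
#else
    // Stub used when the kernel was compiled for an arch that cannot run it,
    // e.g. a build targeting only compute_52 ("CUDA arch 520").
    (void) a; (void) b; (void) n; (void) out;
    printf("ERROR: kernel has no device code compatible with this CUDA arch\n");
    __trap();
#endif
}

int main() {
    int *a, *b, *out;
    cudaMallocManaged(&a, 8 * sizeof(int));
    cudaMallocManaged(&b, 8 * sizeof(int));
    cudaMallocManaged(&out, sizeof(int));
    for (int i = 0; i < 8; ++i) { a[i] = 0x01010101; b[i] = 0x02020202; }  // bytes 1 and 2

    dot_kernel<<<1, 1>>>(a, b, 8, out);
    cudaError_t err = cudaDeviceSynchronize();
    if (err == cudaSuccess) {
        printf("dot product: %d\n", *out);  // 8 * (4 * 1 * 2) = 64
    } else {
        printf("kernel failed: %s\n", cudaGetErrorString(err));
    }
    cudaFree(a); cudaFree(b); cudaFree(out);
    return 0;
}
```

Built with something like `nvcc -arch=compute_52 sketch.cu`, the stub branch is what ends up running even on a GPU that could execute the real kernel, while `-arch=sm_86` selects the fast path; that is consistent with the symptom reported above.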