
Bug: failed compile rocm build on windows using cmake #7743

Closed · sorasoras opened this issue Jun 4, 2024 · 8 comments

Labels: bug-unconfirmed, high severity (used to report high severity bugs in llama.cpp; malfunctioning hinders an important workflow)

Comments

sorasoras commented Jun 4, 2024

What happened?

cmake .. -G "Ninja" -DCMAKE_BUILD_TYPE=Release -DLLAMA_HIPBLAS=ON -DCMAKE_C_COMPILER="C:/Program Files/AMD/ROCm/5.7/bin/clang.exe" -DCMAKE_CXX_COMPILER="C:/Program Files/AMD/ROCm/5.7/bin/clang++.exe" -DAMDGPU_TARGETS="gfx1100"

Name and Version

Version 3083, ROCm 5.7

What operating system are you seeing the problem on?

Windows 11

Relevant log output

cmake --build . -j 99 --config Release
[2/71] Linking CXX executable bin\benchmark.exe
FAILED: bin/benchmark.exe
cmd.exe /C "cd . && C:\PROGRA~1\AMD\ROCm\5.7\bin\CLANG_~1.EXE -fuse-ld=lld-link -nostartfiles -nostdlib -O3 -DNDEBUG -D_DLL -D_MT -Xclang --dependent-lib=msvcrt -Xlinker /subsystem:console common/CMakeFiles/build_info.dir/build-info.cpp.obj examples/benchmark/CMakeFiles/benchmark.dir/benchmark-matmult.cpp.obj -o bin\benchmark.exe -Xlinker /MANIFEST:EMBED -Xlinker /implib:examples\benchmark\benchmark.lib -Xlinker /pdb:bin\benchmark.pdb -Xlinker /version:0.0   llama.lib  --hip-link  --offload-arch=gfx1100  "C:/Program Files/AMD/ROCm/5.7/lib/hipblas.lib"  "C:/Program Files/AMD/ROCm/5.7/lib/rocblas.lib"  "C:/Program Files/AMD/ROCm/5.7/lib/clang/17.0.0/lib/windows/clang_rt.builtins-x86_64.lib"  "C:/Program Files/AMD/ROCm/5.7/lib/amdhip64.lib"  -lkernel32 -luser32 -lgdi32 -lwinspool -lshell32 -lole32 -loleaut32 -luuid -lcomdlg32 -ladvapi32 -loldnames  && cd ."
lld-link: error: undefined symbol: __declspec(dllimport) omp_get_max_threads
>>> referenced by llama.lib(ggml.c.obj):(ggml_graph_compute)
>>> referenced by llama.lib(ggml.c.obj):(ggml_graph_compute)

lld-link: error: undefined symbol: __kmpc_global_thread_num
>>> referenced by llama.lib(ggml.c.obj):(ggml_graph_compute)

lld-link: error: undefined symbol: __kmpc_push_num_threads
>>> referenced by llama.lib(ggml.c.obj):(ggml_graph_compute)

lld-link: error: undefined symbol: __kmpc_fork_call
>>> referenced by llama.lib(ggml.c.obj):(ggml_graph_compute)

lld-link: error: undefined symbol: __kmpc_single
>>> referenced by llama.lib(ggml.c.obj):(.omp_outlined.)

lld-link: error: undefined symbol: __declspec(dllimport) omp_get_num_threads
>>> referenced by llama.lib(ggml.c.obj):(.omp_outlined.)

lld-link: error: undefined symbol: __kmpc_end_single
>>> referenced by llama.lib(ggml.c.obj):(.omp_outlined.)

lld-link: error: undefined symbol: __kmpc_barrier
>>> referenced by llama.lib(ggml.c.obj):(.omp_outlined.)

lld-link: error: undefined symbol: __declspec(dllimport) omp_get_thread_num
>>> referenced by llama.lib(ggml.c.obj):(.omp_outlined.)
CLANG_~1: error: linker command failed with exit code 1 (use -v to see invocation)
[3/71] Linking CXX executable bin\quantize-stats.exe
FAILED: bin/quantize-stats.exe
cmd.exe /C "cd . && C:\PROGRA~1\AMD\ROCm\5.7\bin\CLANG_~1.EXE -fuse-ld=lld-link -nostartfiles -nostdlib -O3 -DNDEBUG -D_DLL -D_MT -Xclang --dependent-lib=msvcrt -Xlinker /subsystem:console common/CMakeFiles/build_info.dir/build-info.cpp.obj examples/quantize-stats/CMakeFiles/quantize-stats.dir/quantize-stats.cpp.obj -o bin\quantize-stats.exe -Xlinker /MANIFEST:EMBED -Xlinker /implib:examples\quantize-stats\quantize-stats.lib -Xlinker /pdb:bin\quantize-stats.pdb -Xlinker /version:0.0   llama.lib  --hip-link  --offload-arch=gfx1100  "C:/Program Files/AMD/ROCm/5.7/lib/hipblas.lib"  "C:/Program Files/AMD/ROCm/5.7/lib/rocblas.lib"  "C:/Program Files/AMD/ROCm/5.7/lib/clang/17.0.0/lib/windows/clang_rt.builtins-x86_64.lib"  "C:/Program Files/AMD/ROCm/5.7/lib/amdhip64.lib"  -lkernel32 -luser32 -lgdi32 -lwinspool -lshell32 -lole32 -loleaut32 -luuid -lcomdlg32 -ladvapi32 -loldnames  && cd ."
lld-link: error: undefined symbol: __declspec(dllimport) omp_get_max_threads
>>> referenced by llama.lib(ggml.c.obj):(ggml_graph_compute)
>>> referenced by llama.lib(ggml.c.obj):(ggml_graph_compute)

lld-link: error: undefined symbol: __kmpc_global_thread_num
>>> referenced by llama.lib(ggml.c.obj):(ggml_graph_compute)

lld-link: error: undefined symbol: __kmpc_push_num_threads
>>> referenced by llama.lib(ggml.c.obj):(ggml_graph_compute)

lld-link: error: undefined symbol: __kmpc_fork_call
>>> referenced by llama.lib(ggml.c.obj):(ggml_graph_compute)

lld-link: error: undefined symbol: __kmpc_single
>>> referenced by llama.lib(ggml.c.obj):(.omp_outlined.)

lld-link: error: undefined symbol: __declspec(dllimport) omp_get_num_threads
>>> referenced by llama.lib(ggml.c.obj):(.omp_outlined.)

lld-link: error: undefined symbol: __kmpc_end_single
>>> referenced by llama.lib(ggml.c.obj):(.omp_outlined.)

lld-link: error: undefined symbol: __kmpc_barrier
>>> referenced by llama.lib(ggml.c.obj):(.omp_outlined.)

lld-link: error: undefined symbol: __declspec(dllimport) omp_get_thread_num
>>> referenced by llama.lib(ggml.c.obj):(.omp_outlined.)
CLANG_~1: error: linker command failed with exit code 1 (use -v to see invocation)
[4/71] Linking CXX executable bin\gguf.exe
FAILED: bin/gguf.exe
cmd.exe /C "cd . && C:\PROGRA~1\AMD\ROCm\5.7\bin\CLANG_~1.EXE -fuse-ld=lld-link -nostartfiles -nostdlib -O3 -DNDEBUG -D_DLL -D_MT -Xclang --dependent-lib=msvcrt -Xlinker /subsystem:console CMakeFiles/ggml.dir/ggml.c.obj CMakeFiles/ggml.dir/ggml-alloc.c.obj CMakeFiles/ggml.dir/ggml-backend.c.obj CMakeFiles/ggml.dir/ggml-quants.c.obj CMakeFiles/ggml.dir/ggml-cuda/acc.cu.obj CMakeFiles/ggml.dir/ggml-cuda/arange.cu.obj CMakeFiles/ggml.dir/ggml-cuda/argsort.cu.obj CMakeFiles/ggml.dir/ggml-cuda/binbcast.cu.obj CMakeFiles/ggml.dir/ggml-cuda/clamp.cu.obj CMakeFiles/ggml.dir/ggml-cuda/concat.cu.obj CMakeFiles/ggml.dir/ggml-cuda/convert.cu.obj CMakeFiles/ggml.dir/ggml-cuda/cpy.cu.obj CMakeFiles/ggml.dir/ggml-cuda/diagmask.cu.obj CMakeFiles/ggml.dir/ggml-cuda/dmmv.cu.obj CMakeFiles/ggml.dir/ggml-cuda/fattn-tile-f16.cu.obj CMakeFiles/ggml.dir/ggml-cuda/fattn-tile-f32.cu.obj CMakeFiles/ggml.dir/ggml-cuda/fattn.cu.obj CMakeFiles/ggml.dir/ggml-cuda/getrows.cu.obj CMakeFiles/ggml.dir/ggml-cuda/im2col.cu.obj CMakeFiles/ggml.dir/ggml-cuda/mmq.cu.obj CMakeFiles/ggml.dir/ggml-cuda/mmvq.cu.obj CMakeFiles/ggml.dir/ggml-cuda/norm.cu.obj CMakeFiles/ggml.dir/ggml-cuda/pad.cu.obj CMakeFiles/ggml.dir/ggml-cuda/pool2d.cu.obj CMakeFiles/ggml.dir/ggml-cuda/quantize.cu.obj CMakeFiles/ggml.dir/ggml-cuda/rope.cu.obj CMakeFiles/ggml.dir/ggml-cuda/scale.cu.obj CMakeFiles/ggml.dir/ggml-cuda/softmax.cu.obj CMakeFiles/ggml.dir/ggml-cuda/sumrows.cu.obj CMakeFiles/ggml.dir/ggml-cuda/tsembd.cu.obj CMakeFiles/ggml.dir/ggml-cuda/unary.cu.obj CMakeFiles/ggml.dir/ggml-cuda/upscale.cu.obj CMakeFiles/ggml.dir/ggml-cuda.cu.obj CMakeFiles/ggml.dir/ggml-cuda/template-instances/fattn-wmma-f16-instance-kqfloat-cpb16.cu.obj CMakeFiles/ggml.dir/ggml-cuda/template-instances/fattn-wmma-f16-instance-kqfloat-cpb32.cu.obj CMakeFiles/ggml.dir/ggml-cuda/template-instances/fattn-wmma-f16-instance-kqhalf-cpb16.cu.obj CMakeFiles/ggml.dir/ggml-cuda/template-instances/fattn-wmma-f16-instance-kqhalf-cpb32.cu.obj CMakeFiles/ggml.dir/ggml-cuda/template-instances/fattn-wmma-f16-instance-kqhalf-cpb8.cu.obj CMakeFiles/ggml.dir/ggml-cuda/template-instances/fattn-vec-f16-instance-hs128-q4_0-q4_0.cu.obj CMakeFiles/ggml.dir/ggml-cuda/template-instances/fattn-vec-f32-instance-hs128-q4_0-q4_0.cu.obj CMakeFiles/ggml.dir/ggml-cuda/template-instances/fattn-vec-f16-instance-hs128-q8_0-q8_0.cu.obj CMakeFiles/ggml.dir/ggml-cuda/template-instances/fattn-vec-f32-instance-hs128-q8_0-q8_0.cu.obj CMakeFiles/ggml.dir/ggml-cuda/template-instances/fattn-vec-f16-instance-hs128-f16-f16.cu.obj CMakeFiles/ggml.dir/ggml-cuda/template-instances/fattn-vec-f16-instance-hs256-f16-f16.cu.obj CMakeFiles/ggml.dir/ggml-cuda/template-instances/fattn-vec-f16-instance-hs64-f16-f16.cu.obj CMakeFiles/ggml.dir/ggml-cuda/template-instances/fattn-vec-f32-instance-hs128-f16-f16.cu.obj CMakeFiles/ggml.dir/ggml-cuda/template-instances/fattn-vec-f32-instance-hs256-f16-f16.cu.obj CMakeFiles/ggml.dir/ggml-cuda/template-instances/fattn-vec-f32-instance-hs64-f16-f16.cu.obj CMakeFiles/ggml.dir/sgemm.cpp.obj examples/gguf/CMakeFiles/gguf.dir/gguf.cpp.obj -o bin\gguf.exe -Xlinker /MANIFEST:EMBED -Xlinker /implib:examples\gguf\gguf.lib -Xlinker /pdb:bin\gguf.pdb -Xlinker /version:0.0   --hip-link  --offload-arch=gfx1100  "C:/Program Files/AMD/ROCm/5.7/lib/hipblas.lib"  "C:/Program Files/AMD/ROCm/5.7/lib/rocblas.lib"  "C:/Program Files/AMD/ROCm/5.7/lib/clang/17.0.0/lib/windows/clang_rt.builtins-x86_64.lib"  "C:/Program Files/AMD/ROCm/5.7/lib/amdhip64.lib"  -lkernel32 -luser32 -lgdi32 -lwinspool -lshell32 -lole32 
-loleaut32 -luuid -lcomdlg32 -ladvapi32 -loldnames  && cd ."
lld-link: error: undefined symbol: __declspec(dllimport) omp_get_max_threads
>>> referenced by CMakeFiles/ggml.dir/ggml.c.obj:(ggml_graph_compute)
>>> referenced by CMakeFiles/ggml.dir/ggml.c.obj:(ggml_graph_compute)

lld-link: error: undefined symbol: __kmpc_global_thread_num
>>> referenced by CMakeFiles/ggml.dir/ggml.c.obj:(ggml_graph_compute)

lld-link: error: undefined symbol: __kmpc_push_num_threads
>>> referenced by CMakeFiles/ggml.dir/ggml.c.obj:(ggml_graph_compute)

lld-link: error: undefined symbol: __kmpc_fork_call
>>> referenced by CMakeFiles/ggml.dir/ggml.c.obj:(ggml_graph_compute)

lld-link: error: undefined symbol: __kmpc_single
>>> referenced by CMakeFiles/ggml.dir/ggml.c.obj:(.omp_outlined.)

lld-link: error: undefined symbol: __declspec(dllimport) omp_get_num_threads
>>> referenced by CMakeFiles/ggml.dir/ggml.c.obj:(.omp_outlined.)

lld-link: error: undefined symbol: __kmpc_end_single
>>> referenced by CMakeFiles/ggml.dir/ggml.c.obj:(.omp_outlined.)

lld-link: error: undefined symbol: __kmpc_barrier
>>> referenced by CMakeFiles/ggml.dir/ggml.c.obj:(.omp_outlined.)

lld-link: error: undefined symbol: __declspec(dllimport) omp_get_thread_num
>>> referenced by CMakeFiles/ggml.dir/ggml.c.obj:(.omp_outlined.)
CLANG_~1: error: linker command failed with exit code 1 (use -v to see invocation)
[5/71] Linking CXX static library common\common.lib
ninja: build stopped: subcommand failed.

sorasoras added the bug-unconfirmed and high severity labels on Jun 4, 2024
slaren (Collaborator) commented Jun 4, 2024

Try deleting the build directory and re-configure cmake from scratch. OpenMP should only be used if cmake detects that it is supported in the system. If that still doesn't work, configure with -DLLAMA_OPENMP=OFF to disable OpenMP.
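
For reference, a minimal sketch of that first suggestion in Windows cmd syntax, assuming an out-of-tree build directory named build (the directory name is not stated in the thread) and reusing the configure command from the report:

REM run from the llama.cpp repository root; "build" is an assumed directory name
rmdir /S /Q build
mkdir build && cd build
cmake .. -G "Ninja" -DCMAKE_BUILD_TYPE=Release -DLLAMA_HIPBLAS=ON -DCMAKE_C_COMPILER="C:/Program Files/AMD/ROCm/5.7/bin/clang.exe" -DCMAKE_CXX_COMPILER="C:/Program Files/AMD/ROCm/5.7/bin/clang++.exe" -DAMDGPU_TARGETS="gfx1100"
cmake --build . -j 99 --config Release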

sorasoras (Author) commented

> Try deleting the build directory and re-configure cmake from scratch. OpenMP should only be used if cmake detects that it is supported in the system. If that still doesn't work, configure with -DLLAMA_OPENOP=OFF to disable OpenMP.

cmake .. -G "Ninja" -DCMAKE_BUILD_TYPE=Release -DLLAMA_HIPBLAS=ON -DLLAMA_OPENOP=OFF -DCMAKE_C_COMPILER="C:/Program Files/AMD/ROCm/5.7/bin/clang.exe" -DCMAKE_CXX_COMPILER="C:/Program Files/AMD/ROCm/5.7/bin/clang++.exe" -DAMDGPU_TARGETS="gfx1100"

CMake Warning:
Manually-specified variables were not used by the project:

LLAMA_OPENOP

I removed everything in the CMake build folder; it does not help.
I cannot configure with the OPENOP flag.

slaren (Collaborator) commented Jun 4, 2024

Sorry, the correct flag is -DLLAMA_OPENMP=OFF.
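
Applied to the configure command from the report, only the flag changes (a sketch with the same paths and target as above):

cmake .. -G "Ninja" -DCMAKE_BUILD_TYPE=Release -DLLAMA_HIPBLAS=ON -DLLAMA_OPENMP=OFF -DCMAKE_C_COMPILER="C:/Program Files/AMD/ROCm/5.7/bin/clang.exe" -DCMAKE_CXX_COMPILER="C:/Program Files/AMD/ROCm/5.7/bin/clang++.exe" -DAMDGPU_TARGETS="gfx1100"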

sorasoras (Author) commented

> Sorry, the correct flag is -DLLAMA_OPENMP=OFF.

Yes, I think it works with OpenMP off.

sorasoras (Author) commented

> Sorry, the correct flag is -DLLAMA_OPENMP=OFF.

It seems like OpenMP is broken for ROCm on Windows, even though CMake reports it as supported on both of my systems.

slaren (Collaborator) commented Jun 4, 2024

It might work on a more recent version of ROCm, but it won't make a difference unless you are partially offloading a model or using nkvo.

SteelPh0enix commented

I can confirm this issue happens in my case too.
The scenario is exactly the same as in the issue: ROCm 5.7, a clean llama.cpp clone, and the build fails because of missing OpenMP symbols.

Perhaps the -fopenmp flag is missing from the CMake configuration?
https://stackoverflow.com/questions/47303810/how-to-get-clang-with-openmp-working-on-msvc-2015
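
For context, a minimal standalone sketch of how OpenMP is typically detected and attached in CMake (this is not the actual llama.cpp CMakeLists; the project name omp_probe and main.c are hypothetical). If find_package(OpenMP) reports success but the ROCm 5.7 clang toolchain on Windows does not ship a usable OpenMP runtime or import library, the -fopenmp compile flag gets applied while the link step still cannot resolve the runtime, which matches the __kmpc_* / omp_* undefined symbols in the log above:

# Hypothetical probe project, not llama.cpp's build script.
cmake_minimum_required(VERSION 3.14)
project(omp_probe C)

# Detects the compiler's OpenMP support and defines OpenMP_C_FOUND,
# OpenMP_C_FLAGS (e.g. -fopenmp) and the OpenMP::OpenMP_C imported target.
find_package(OpenMP)

add_executable(omp_probe main.c)

if (OpenMP_C_FOUND)
    # Linking the imported target adds both the compile flag and the
    # OpenMP runtime library, provided the toolchain actually ships one.
    target_link_libraries(omp_probe PRIVATE OpenMP::OpenMP_C)
endif()

Configuring this probe with the ROCm clang/clang++ as the compilers would show whether the toolchain's OpenMP runtime can be linked at all.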

Skyrion9 commented Jun 27, 2024

Use -DGGML_OPENMP=OFF

E.g.

set PATH=%HIP_PATH%\bin;%PATH%
cmake -S . -B build -G Ninja -DGGML_OPENMP=OFF -DAMDGPU_TARGETS=gfx1030 -DGGML_HIPBLAS=ON -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=clang++ -DCMAKE_BUILD_TYPE=Release
cmake --build build

This compiled for me with x64 Native VS 2022 on the current master. Change the target to match your GPU.

cocochick added a commit to cocochick/llama.cpp that referenced this issue on Nov 15, 2024 (ggerganov#9666):

Fix the compilation error "call to undeclared function '_mm256_dpbusd_epi32'". The intrinsic _mm256_dpbusd_epi32 is declared in avx512vlvnniintrin.h, so __AVX__, __AVX512VNNI__, and __AVX512VL__ need to be defined.

According to ggerganov#7743, -DGGML_OPENMP=OFF needs to be added, so it is added to the docs.