Failed to build llama_cpp_python? #223

Open

xxl2005 opened this issue Jan 27, 2025 · 5 comments

Comments

@xxl2005

xxl2005 commented Jan 27, 2025

Hello,

I'm running into an install problem: the llama_cpp_python wheel fails to build. Full output below.

```
Building wheels for collected packages: llama_cpp_python
Building wheel for llama_cpp_python (pyproject.toml): started
Building wheel for llama_cpp_python (pyproject.toml): finished with status 'error'
Failed to build llama_cpp_python

stderr: error: subprocess-exited-with-error

× Building wheel for llama_cpp_python (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [114 lines of output]
*** scikit-build-core 0.10.7 using CMake 3.31.2 (wheel)
*** Configuring CMake...
loading initial cache file /tmp/tmpg9k8bu4_/build/CMakeInit.txt
-- The C compiler identification is GNU 14.2.1
-- The CXX compiler identification is GNU 14.2.1
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/gcc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/g++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found Git: /usr/bin/git (found version "2.47.1")
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
-- Found OpenMP_C: -fopenmp (found version "4.5")
-- Found OpenMP_CXX: -fopenmp (found version "4.5")
-- Found OpenMP: TRUE (found version "4.5")
-- OpenMP found
-- Using llamafile
-- Warning: ccache not found - consider installing it for faster compilation or disable this warning with GGML_CCACHE=OFF
-- CMAKE_SYSTEM_PROCESSOR: x86_64
-- x86 detected
  CMake Warning (dev) at CMakeLists.txt:9 (install):
    Target llama has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
  Call Stack (most recent call first):
    CMakeLists.txt:73 (llama_cpp_python_install_target)
  This warning is for project developers.  Use -Wno-dev to suppress it.

  CMake Warning (dev) at CMakeLists.txt:17 (install):
    Target llama has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
  Call Stack (most recent call first):
    CMakeLists.txt:73 (llama_cpp_python_install_target)
  This warning is for project developers.  Use -Wno-dev to suppress it.
  
  CMake Warning (dev) at CMakeLists.txt:9 (install):
    Target ggml has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
  Call Stack (most recent call first):
    CMakeLists.txt:74 (llama_cpp_python_install_target)
  This warning is for project developers.  Use -Wno-dev to suppress it.
  
  CMake Warning (dev) at CMakeLists.txt:17 (install):
    Target ggml has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
  Call Stack (most recent call first):
    CMakeLists.txt:74 (llama_cpp_python_install_target)
  This warning is for project developers.  Use -Wno-dev to suppress it.
  
  -- Configuring done (0.9s)
  -- Generating done (0.0s)
  -- Build files have been written to: /tmp/tmpg9k8bu4_/build
  *** Building project with Ninja...
  Change Dir: '/tmp/tmpg9k8bu4_/build'
  
  Run Build Command(s): /usr/bin/ninja -v
  [1/33] cd /tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp && /usr/bin/cmake -DMSVC= -DCMAKE_C_COMPILER_VERSION=14.2.1 -DCMAKE_C_COMPILER_ID=GNU -DCMAKE_VS_PLATFORM_NAME= -DCMAKE_C_COMPILER=/usr/bin/gcc -P /tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/common/cmake/build-info-gen-cpp.cmake
  -- Found Git: /usr/bin/git (found version "2.47.1")
  [2/33] /usr/bin/g++  -pthread -B /home/tobias/miniconda3/envs/RuinedFooocus/compiler_compat   -O3 -DNDEBUG -fPIC -MD -MT vendor/llama.cpp/common/CMakeFiles/build_info.dir/build-info.cpp.o -MF vendor/llama.cpp/common/CMakeFiles/build_info.dir/build-info.cpp.o.d -o vendor/llama.cpp/common/CMakeFiles/build_info.dir/build-info.cpp.o -c /tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/common/build-info.cpp
  [3/33] /usr/bin/gcc  -pthread -B /home/tobias/miniconda3/envs/RuinedFooocus/compiler_compat -DGGML_BUILD -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_LLAMAFILE -DGGML_USE_OPENMP -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_EXPORTS -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/ggml/src/../include -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/ggml/src/. -O3 -DNDEBUG -std=gnu11 -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wdouble-promotion -march=native -fopenmp -MD -MT vendor/llama.cpp/ggml/src/CMakeFiles/ggml.dir/ggml-alloc.c.o -MF vendor/llama.cpp/ggml/src/CMakeFiles/ggml.dir/ggml-alloc.c.o.d -o vendor/llama.cpp/ggml/src/CMakeFiles/ggml.dir/ggml-alloc.c.o -c /tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/ggml/src/ggml-alloc.c
  [4/33] /usr/bin/g++  -pthread -B /home/tobias/miniconda3/envs/RuinedFooocus/compiler_compat -DLLAMA_BUILD -DLLAMA_SHARED -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/examples/llava/. -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/examples/llava/../.. -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/examples/llava/../../common -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/ggml/src/../include -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/src/. -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/src/../include -O3 -DNDEBUG -fPIC -Wno-cast-qual -MD -MT vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/llava.cpp.o -MF vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/llava.cpp.o.d -o vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/llava.cpp.o -c /tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/examples/llava/llava.cpp
  [5/33] /usr/bin/g++  -pthread -B /home/tobias/miniconda3/envs/RuinedFooocus/compiler_compat  -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/common/. -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/src/. -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/src/../include -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -MD -MT vendor/llama.cpp/common/CMakeFiles/common.dir/console.cpp.o -MF vendor/llama.cpp/common/CMakeFiles/common.dir/console.cpp.o.d -o vendor/llama.cpp/common/CMakeFiles/common.dir/console.cpp.o -c /tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/common/console.cpp
  [6/33] /usr/bin/gcc  -pthread -B /home/tobias/miniconda3/envs/RuinedFooocus/compiler_compat -DGGML_BUILD -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_LLAMAFILE -DGGML_USE_OPENMP -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_EXPORTS -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/ggml/src/../include -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/ggml/src/. -O3 -DNDEBUG -std=gnu11 -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wdouble-promotion -march=native -fopenmp -MD -MT vendor/llama.cpp/ggml/src/CMakeFiles/ggml.dir/ggml-backend.c.o -MF vendor/llama.cpp/ggml/src/CMakeFiles/ggml.dir/ggml-backend.c.o.d -o vendor/llama.cpp/ggml/src/CMakeFiles/ggml.dir/ggml-backend.c.o -c /tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/ggml/src/ggml-backend.c
  [7/33] /usr/bin/gcc  -pthread -B /home/tobias/miniconda3/envs/RuinedFooocus/compiler_compat -DGGML_BUILD -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_LLAMAFILE -DGGML_USE_OPENMP -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_EXPORTS -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/ggml/src/../include -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/ggml/src/. -O3 -DNDEBUG -std=gnu11 -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wdouble-promotion -march=native -fopenmp -MD -MT vendor/llama.cpp/ggml/src/CMakeFiles/ggml.dir/ggml-aarch64.c.o -MF vendor/llama.cpp/ggml/src/CMakeFiles/ggml.dir/ggml-aarch64.c.o.d -o vendor/llama.cpp/ggml/src/CMakeFiles/ggml.dir/ggml-aarch64.c.o -c /tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/ggml/src/ggml-aarch64.c
  [8/33] /usr/bin/g++  -pthread -B /home/tobias/miniconda3/envs/RuinedFooocus/compiler_compat  -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/common/. -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/src/. -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/src/../include -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -MD -MT vendor/llama.cpp/common/CMakeFiles/common.dir/log.cpp.o -MF vendor/llama.cpp/common/CMakeFiles/common.dir/log.cpp.o.d -o vendor/llama.cpp/common/CMakeFiles/common.dir/log.cpp.o -c /tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/common/log.cpp
  [9/33] /usr/bin/g++  -pthread -B /home/tobias/miniconda3/envs/RuinedFooocus/compiler_compat -DLLAMA_BUILD -DLLAMA_SHARED -Dllama_EXPORTS -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/src/. -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/src/../include -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -MD -MT vendor/llama.cpp/src/CMakeFiles/llama.dir/unicode-data.cpp.o -MF vendor/llama.cpp/src/CMakeFiles/llama.dir/unicode-data.cpp.o.d -o vendor/llama.cpp/src/CMakeFiles/llama.dir/unicode-data.cpp.o -c /tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/src/unicode-data.cpp
  [10/33] /usr/bin/g++  -pthread -B /home/tobias/miniconda3/envs/RuinedFooocus/compiler_compat  -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/common/. -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/src/. -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/src/../include -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -MD -MT vendor/llama.cpp/common/CMakeFiles/common.dir/ngram-cache.cpp.o -MF vendor/llama.cpp/common/CMakeFiles/common.dir/ngram-cache.cpp.o.d -o vendor/llama.cpp/common/CMakeFiles/common.dir/ngram-cache.cpp.o -c /tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/common/ngram-cache.cpp
  [11/33] /usr/bin/g++  -pthread -B /home/tobias/miniconda3/envs/RuinedFooocus/compiler_compat  -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/common/. -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/src/. -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/src/../include -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -MD -MT vendor/llama.cpp/common/CMakeFiles/common.dir/sampling.cpp.o -MF vendor/llama.cpp/common/CMakeFiles/common.dir/sampling.cpp.o.d -o vendor/llama.cpp/common/CMakeFiles/common.dir/sampling.cpp.o -c /tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/common/sampling.cpp
  [12/33] /usr/bin/g++  -pthread -B /home/tobias/miniconda3/envs/RuinedFooocus/compiler_compat  -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/common/. -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/src/. -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/src/../include -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/ggml/src/../include -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/examples/llava/. -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/examples/llava/../.. -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/examples/llava/../../common -O3 -DNDEBUG -MD -MT vendor/llama.cpp/examples/llava/CMakeFiles/llama-llava-cli.dir/llava-cli.cpp.o -MF vendor/llama.cpp/examples/llava/CMakeFiles/llama-llava-cli.dir/llava-cli.cpp.o.d -o vendor/llama.cpp/examples/llava/CMakeFiles/llama-llava-cli.dir/llava-cli.cpp.o -c /tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/examples/llava/llava-cli.cpp
  [13/33] /usr/bin/g++  -pthread -B /home/tobias/miniconda3/envs/RuinedFooocus/compiler_compat -DLLAMA_BUILD -DLLAMA_SHARED -Dllama_EXPORTS -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/src/. -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/src/../include -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -MD -MT vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-sampling.cpp.o -MF vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-sampling.cpp.o.d -o vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-sampling.cpp.o -c /tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/src/llama-sampling.cpp
  [14/33] /usr/bin/g++  -pthread -B /home/tobias/miniconda3/envs/RuinedFooocus/compiler_compat  -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/common/. -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/src/. -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/src/../include -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/ggml/src/../include -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/examples/llava/. -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/examples/llava/../.. -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/examples/llava/../../common -O3 -DNDEBUG -MD -MT vendor/llama.cpp/examples/llava/CMakeFiles/llama-minicpmv-cli.dir/minicpmv-cli.cpp.o -MF vendor/llama.cpp/examples/llava/CMakeFiles/llama-minicpmv-cli.dir/minicpmv-cli.cpp.o.d -o vendor/llama.cpp/examples/llava/CMakeFiles/llama-minicpmv-cli.dir/minicpmv-cli.cpp.o -c /tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/examples/llava/minicpmv-cli.cpp
  [15/33] /usr/bin/g++  -pthread -B /home/tobias/miniconda3/envs/RuinedFooocus/compiler_compat -DLLAMA_BUILD -DLLAMA_SHARED -Dllama_EXPORTS -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/src/. -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/src/../include -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -MD -MT vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-grammar.cpp.o -MF vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-grammar.cpp.o.d -o vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-grammar.cpp.o -c /tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/src/llama-grammar.cpp
  [16/33] /usr/bin/g++  -pthread -B /home/tobias/miniconda3/envs/RuinedFooocus/compiler_compat -DGGML_BUILD -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_LLAMAFILE -DGGML_USE_OPENMP -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_EXPORTS -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/ggml/src/../include -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/ggml/src/. -O3 -DNDEBUG -std=gnu++11 -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wno-format-truncation -Wextra-semi -march=native -fopenmp -MD -MT vendor/llama.cpp/ggml/src/CMakeFiles/ggml.dir/llamafile/sgemm.cpp.o -MF vendor/llama.cpp/ggml/src/CMakeFiles/ggml.dir/llamafile/sgemm.cpp.o.d -o vendor/llama.cpp/ggml/src/CMakeFiles/ggml.dir/llamafile/sgemm.cpp.o -c /tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/ggml/src/llamafile/sgemm.cpp
  [17/33] /usr/bin/g++  -pthread -B /home/tobias/miniconda3/envs/RuinedFooocus/compiler_compat  -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/common/. -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/src/. -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/src/../include -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -MD -MT vendor/llama.cpp/common/CMakeFiles/common.dir/train.cpp.o -MF vendor/llama.cpp/common/CMakeFiles/common.dir/train.cpp.o.d -o vendor/llama.cpp/common/CMakeFiles/common.dir/train.cpp.o -c /tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/common/train.cpp
  [18/33] /usr/bin/g++  -pthread -B /home/tobias/miniconda3/envs/RuinedFooocus/compiler_compat -DLLAMA_BUILD -DLLAMA_SHARED -Dllama_EXPORTS -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/src/. -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/src/../include -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -MD -MT vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-vocab.cpp.o -MF vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-vocab.cpp.o.d -o vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-vocab.cpp.o -c /tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/src/llama-vocab.cpp
  [19/33] /usr/bin/gcc  -pthread -B /home/tobias/miniconda3/envs/RuinedFooocus/compiler_compat -DGGML_BUILD -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_LLAMAFILE -DGGML_USE_OPENMP -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_EXPORTS -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/ggml/src/../include -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/ggml/src/. -O3 -DNDEBUG -std=gnu11 -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wdouble-promotion -march=native -fopenmp -MD -MT vendor/llama.cpp/ggml/src/CMakeFiles/ggml.dir/ggml-quants.c.o -MF vendor/llama.cpp/ggml/src/CMakeFiles/ggml.dir/ggml-quants.c.o.d -o vendor/llama.cpp/ggml/src/CMakeFiles/ggml.dir/ggml-quants.c.o -c /tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/ggml/src/ggml-quants.c
  [20/33] /usr/bin/g++  -pthread -B /home/tobias/miniconda3/envs/RuinedFooocus/compiler_compat  -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/common/. -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/src/. -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/src/../include -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -MD -MT vendor/llama.cpp/common/CMakeFiles/common.dir/common.cpp.o -MF vendor/llama.cpp/common/CMakeFiles/common.dir/common.cpp.o.d -o vendor/llama.cpp/common/CMakeFiles/common.dir/common.cpp.o -c /tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/common/common.cpp
  [21/33] /usr/bin/g++  -pthread -B /home/tobias/miniconda3/envs/RuinedFooocus/compiler_compat -DLLAMA_BUILD -DLLAMA_SHARED -Dllama_EXPORTS -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/src/. -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/src/../include -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -MD -MT vendor/llama.cpp/src/CMakeFiles/llama.dir/unicode.cpp.o -MF vendor/llama.cpp/src/CMakeFiles/llama.dir/unicode.cpp.o.d -o vendor/llama.cpp/src/CMakeFiles/llama.dir/unicode.cpp.o -c /tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/src/unicode.cpp
  [22/33] /usr/bin/gcc  -pthread -B /home/tobias/miniconda3/envs/RuinedFooocus/compiler_compat -DGGML_BUILD -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_LLAMAFILE -DGGML_USE_OPENMP -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_EXPORTS -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/ggml/src/../include -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/ggml/src/. -O3 -DNDEBUG -std=gnu11 -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wdouble-promotion -march=native -fopenmp -MD -MT vendor/llama.cpp/ggml/src/CMakeFiles/ggml.dir/ggml.c.o -MF vendor/llama.cpp/ggml/src/CMakeFiles/ggml.dir/ggml.c.o.d -o vendor/llama.cpp/ggml/src/CMakeFiles/ggml.dir/ggml.c.o -c /tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/ggml/src/ggml.c
  [23/33] : && /usr/bin/g++  -pthread -B /home/tobias/miniconda3/envs/RuinedFooocus/compiler_compat -fPIC -O3 -DNDEBUG   -shared -Wl,-soname,libggml.so -o vendor/llama.cpp/ggml/src/libggml.so vendor/llama.cpp/ggml/src/CMakeFiles/ggml.dir/ggml.c.o vendor/llama.cpp/ggml/src/CMakeFiles/ggml.dir/ggml-alloc.c.o vendor/llama.cpp/ggml/src/CMakeFiles/ggml.dir/ggml-backend.c.o vendor/llama.cpp/ggml/src/CMakeFiles/ggml.dir/ggml-quants.c.o vendor/llama.cpp/ggml/src/CMakeFiles/ggml.dir/llamafile/sgemm.cpp.o vendor/llama.cpp/ggml/src/CMakeFiles/ggml.dir/ggml-aarch64.c.o  -Wl,-rpath,"\$ORIGIN"  -lm  /usr/lib/libgomp.so && :
  [24/33] /usr/bin/g++  -pthread -B /home/tobias/miniconda3/envs/RuinedFooocus/compiler_compat  -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/common/. -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/src/. -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/src/../include -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -MD -MT vendor/llama.cpp/common/CMakeFiles/common.dir/json-schema-to-grammar.cpp.o -MF vendor/llama.cpp/common/CMakeFiles/common.dir/json-schema-to-grammar.cpp.o.d -o vendor/llama.cpp/common/CMakeFiles/common.dir/json-schema-to-grammar.cpp.o -c /tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/common/json-schema-to-grammar.cpp
  [25/33] /usr/bin/g++  -pthread -B /home/tobias/miniconda3/envs/RuinedFooocus/compiler_compat -DLLAMA_BUILD -DLLAMA_SHARED -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/examples/llava/. -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/examples/llava/../.. -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/examples/llava/../../common -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/ggml/src/../include -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/src/. -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/src/../include -O3 -DNDEBUG -fPIC -Wno-cast-qual -MD -MT vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/clip.cpp.o -MF vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/clip.cpp.o.d -o vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/clip.cpp.o -c /tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/examples/llava/clip.cpp
  [26/33] : && /usr/bin/cmake -E rm -f vendor/llama.cpp/examples/llava/libllava_static.a && /usr/bin/ar qc vendor/llama.cpp/examples/llava/libllava_static.a  vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/llava.cpp.o vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/clip.cpp.o && /usr/bin/ranlib vendor/llama.cpp/examples/llava/libllava_static.a && :
  [27/33] /usr/bin/g++  -pthread -B /home/tobias/miniconda3/envs/RuinedFooocus/compiler_compat  -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/common/. -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/src/. -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/src/../include -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -MD -MT vendor/llama.cpp/common/CMakeFiles/common.dir/arg.cpp.o -MF vendor/llama.cpp/common/CMakeFiles/common.dir/arg.cpp.o.d -o vendor/llama.cpp/common/CMakeFiles/common.dir/arg.cpp.o -c /tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/common/arg.cpp
  [28/33] /usr/bin/g++  -pthread -B /home/tobias/miniconda3/envs/RuinedFooocus/compiler_compat -DLLAMA_BUILD -DLLAMA_SHARED -Dllama_EXPORTS -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/src/. -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/src/../include -I/tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -MD -MT vendor/llama.cpp/src/CMakeFiles/llama.dir/llama.cpp.o -MF vendor/llama.cpp/src/CMakeFiles/llama.dir/llama.cpp.o.d -o vendor/llama.cpp/src/CMakeFiles/llama.dir/llama.cpp.o -c /tmp/pip-install-x1jkmplm/llama-cpp-python_71ad5501b0814bc7ab361aa0f606d9c2/vendor/llama.cpp/src/llama.cpp
  [29/33] : && /usr/bin/g++  -pthread -B /home/tobias/miniconda3/envs/RuinedFooocus/compiler_compat -fPIC -O3 -DNDEBUG   -shared -Wl,-soname,libllama.so -o vendor/llama.cpp/src/libllama.so vendor/llama.cpp/src/CMakeFiles/llama.dir/llama.cpp.o vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-vocab.cpp.o vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-grammar.cpp.o vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-sampling.cpp.o vendor/llama.cpp/src/CMakeFiles/llama.dir/unicode.cpp.o vendor/llama.cpp/src/CMakeFiles/llama.dir/unicode-data.cpp.o  -Wl,-rpath,"\$ORIGIN"  vendor/llama.cpp/ggml/src/libggml.so && :
  [30/33] : && /usr/bin/cmake -E rm -f vendor/llama.cpp/common/libcommon.a && /usr/bin/ar qc vendor/llama.cpp/common/libcommon.a  vendor/llama.cpp/common/CMakeFiles/build_info.dir/build-info.cpp.o vendor/llama.cpp/common/CMakeFiles/common.dir/arg.cpp.o vendor/llama.cpp/common/CMakeFiles/common.dir/common.cpp.o vendor/llama.cpp/common/CMakeFiles/common.dir/console.cpp.o vendor/llama.cpp/common/CMakeFiles/common.dir/json-schema-to-grammar.cpp.o vendor/llama.cpp/common/CMakeFiles/common.dir/log.cpp.o vendor/llama.cpp/common/CMakeFiles/common.dir/ngram-cache.cpp.o vendor/llama.cpp/common/CMakeFiles/common.dir/sampling.cpp.o vendor/llama.cpp/common/CMakeFiles/common.dir/train.cpp.o && /usr/bin/ranlib vendor/llama.cpp/common/libcommon.a && :
  [31/33] : && /usr/bin/g++  -pthread -B /home/tobias/miniconda3/envs/RuinedFooocus/compiler_compat -fPIC -O3 -DNDEBUG   -shared -Wl,-soname,libllava.so -o vendor/llama.cpp/examples/llava/libllava.so vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/llava.cpp.o vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/clip.cpp.o  -Wl,-rpath,"\$ORIGIN"  vendor/llama.cpp/src/libllama.so  vendor/llama.cpp/ggml/src/libggml.so && :
  [32/33] : && /usr/bin/g++  -pthread -B /home/tobias/miniconda3/envs/RuinedFooocus/compiler_compat -O3 -DNDEBUG  vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/llava.cpp.o vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/clip.cpp.o vendor/llama.cpp/examples/llava/CMakeFiles/llama-llava-cli.dir/llava-cli.cpp.o -o vendor/llama.cpp/examples/llava/llama-llava-cli  -Wl,-rpath,/tmp/tmpg9k8bu4_/build/vendor/llama.cpp/src:/tmp/tmpg9k8bu4_/build/vendor/llama.cpp/ggml/src:  vendor/llama.cpp/common/libcommon.a  vendor/llama.cpp/src/libllama.so  vendor/llama.cpp/ggml/src/libggml.so && :
  FAILED: vendor/llama.cpp/examples/llava/llama-llava-cli
  : && /usr/bin/g++  -pthread -B /home/tobias/miniconda3/envs/RuinedFooocus/compiler_compat -O3 -DNDEBUG  vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/llava.cpp.o vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/clip.cpp.o vendor/llama.cpp/examples/llava/CMakeFiles/llama-llava-cli.dir/llava-cli.cpp.o -o vendor/llama.cpp/examples/llava/llama-llava-cli  -Wl,-rpath,/tmp/tmpg9k8bu4_/build/vendor/llama.cpp/src:/tmp/tmpg9k8bu4_/build/vendor/llama.cpp/ggml/src:  vendor/llama.cpp/common/libcommon.a  vendor/llama.cpp/src/libllama.so  vendor/llama.cpp/ggml/src/libggml.so && :
  /home/tobias/miniconda3/envs/RuinedFooocus/compiler_compat/ld: warning: libgomp.so.1, needed by vendor/llama.cpp/ggml/src/libggml.so, not found (try using -rpath or -rpath-link)
  /home/tobias/miniconda3/envs/RuinedFooocus/compiler_compat/ld: vendor/llama.cpp/ggml/src/libggml.so: undefined reference to `GOMP_barrier@GOMP_1.0'
  /home/tobias/miniconda3/envs/RuinedFooocus/compiler_compat/ld: vendor/llama.cpp/ggml/src/libggml.so: undefined reference to `GOMP_parallel@GOMP_4.0'
  /home/tobias/miniconda3/envs/RuinedFooocus/compiler_compat/ld: vendor/llama.cpp/ggml/src/libggml.so: undefined reference to `omp_get_thread_num@OMP_1.0'
  /home/tobias/miniconda3/envs/RuinedFooocus/compiler_compat/ld: vendor/llama.cpp/ggml/src/libggml.so: undefined reference to `GOMP_single_start@GOMP_1.0'
  /home/tobias/miniconda3/envs/RuinedFooocus/compiler_compat/ld: vendor/llama.cpp/ggml/src/libggml.so: undefined reference to `omp_get_num_threads@OMP_1.0'
  collect2: error: ld returned 1 exit status
  [33/33] : && /usr/bin/g++  -pthread -B /home/tobias/miniconda3/envs/RuinedFooocus/compiler_compat -O3 -DNDEBUG  vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/llava.cpp.o vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/clip.cpp.o vendor/llama.cpp/examples/llava/CMakeFiles/llama-minicpmv-cli.dir/minicpmv-cli.cpp.o -o vendor/llama.cpp/examples/llava/llama-minicpmv-cli  -Wl,-rpath,/tmp/tmpg9k8bu4_/build/vendor/llama.cpp/src:/tmp/tmpg9k8bu4_/build/vendor/llama.cpp/ggml/src:  vendor/llama.cpp/common/libcommon.a  vendor/llama.cpp/src/libllama.so  vendor/llama.cpp/ggml/src/libggml.so && :
  FAILED: vendor/llama.cpp/examples/llava/llama-minicpmv-cli
  : && /usr/bin/g++  -pthread -B /home/tobias/miniconda3/envs/RuinedFooocus/compiler_compat -O3 -DNDEBUG  vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/llava.cpp.o vendor/llama.cpp/examples/llava/CMakeFiles/llava.dir/clip.cpp.o vendor/llama.cpp/examples/llava/CMakeFiles/llama-minicpmv-cli.dir/minicpmv-cli.cpp.o -o vendor/llama.cpp/examples/llava/llama-minicpmv-cli  -Wl,-rpath,/tmp/tmpg9k8bu4_/build/vendor/llama.cpp/src:/tmp/tmpg9k8bu4_/build/vendor/llama.cpp/ggml/src:  vendor/llama.cpp/common/libcommon.a  vendor/llama.cpp/src/libllama.so  vendor/llama.cpp/ggml/src/libggml.so && :
  /home/tobias/miniconda3/envs/RuinedFooocus/compiler_compat/ld: warning: libgomp.so.1, needed by vendor/llama.cpp/ggml/src/libggml.so, not found (try using -rpath or -rpath-link)
  /home/tobias/miniconda3/envs/RuinedFooocus/compiler_compat/ld: vendor/llama.cpp/ggml/src/libggml.so: undefined reference to `GOMP_barrier@GOMP_1.0'
  /home/tobias/miniconda3/envs/RuinedFooocus/compiler_compat/ld: vendor/llama.cpp/ggml/src/libggml.so: undefined reference to `GOMP_parallel@GOMP_4.0'
  /home/tobias/miniconda3/envs/RuinedFooocus/compiler_compat/ld: vendor/llama.cpp/ggml/src/libggml.so: undefined reference to `omp_get_thread_num@OMP_1.0'
  /home/tobias/miniconda3/envs/RuinedFooocus/compiler_compat/ld: vendor/llama.cpp/ggml/src/libggml.so: undefined reference to `GOMP_single_start@GOMP_1.0'
  /home/tobias/miniconda3/envs/RuinedFooocus/compiler_compat/ld: vendor/llama.cpp/ggml/src/libggml.so: undefined reference to `omp_get_num_threads@OMP_1.0'
  collect2: error: ld returned 1 exit status
  ninja: build stopped: subcommand failed.
  
  
  *** CMake build failed
  [end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for llama_cpp_python
ERROR: Failed to build installable wheels for some pyproject.toml based projects (llama_cpp_python)
```

@yownas
Collaborator

yownas commented Jan 27, 2025

I wonder if it is miniconda that has problems with RF's requirements. Can you try with virtualenv or pyenv and some version of python 3.10?
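
For reference, a rough sketch of what that would look like (the venv location and the requirements file name are assumptions, not RF's actual layout):

```sh
# Create a clean Python 3.10 environment outside conda (assumes python3.10 is installed)
python3.10 -m venv ~/rf-venv
source ~/rf-venv/bin/activate

# Retry the install from the RuinedFooocus checkout
pip install --upgrade pip
pip install -r requirements.txt   # file name is an assumption; use whatever RF's launcher reads
```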

@iwr-redmond

iwr-redmond commented Jan 28, 2025

Because llama-cpp-python doesn't publish wheels to PyPI, all sorts of strange errors crop up.

At the moment, pip_versions says:

llama_cpp_python==0.3.0; platform_system == "Linux"
https://github.com/abetlen/llama-cpp-python/releases/download/v0.3.0/llama_cpp_python-0.3.0-cp310-cp310-win_amd64.whl; platform_system == "Windows"

You may wish to consider changing this to:

# top of file
--extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cpu

# bottom of file
llama-cpp-python==0.3.2

References:
https://pip.pypa.io/en/stable/reference/requirements-file-format/
https://llama-cpp-python.readthedocs.io/en/latest/
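
For anyone who wants to test the suggested change by hand before editing the requirements file, the equivalent one-off command would be something along these lines (index URL and version taken from the snippet above; wheel availability for a given Python/CPU combination isn't guaranteed):

```sh
pip install llama-cpp-python==0.3.2 \
    --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cpu
```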

@xxl2005
Author

xxl2005 commented Jan 28, 2025

I use miniconda with Python 3.10. The operating system is Linux.

@yownas
Collaborator

yownas commented Jan 30, 2025

> Because llama-cpp-python doesn't publish wheels to PyPI, all sorts of strange errors crop up.
>
> At the moment, pip_versions says:
>
> llama_cpp_python==0.3.0; platform_system == "Linux"
> https://github.com/abetlen/llama-cpp-python/releases/download/v0.3.0/llama_cpp_python-0.3.0-cp310-cp310-win_amd64.whl; platform_system == "Windows"
>
> You may wish to consider changing this to:
>
> # top of file
> --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cpu
>
> # bottom of file
> llama-cpp-python==0.3.2
>
> References: https://pip.pypa.io/en/stable/reference/requirements-file-format/ https://llama-cpp-python.readthedocs.io/en/latest/

Thanks for the tip. Unfortunately those wheels appear to be built against musl (?) and didn't work out of the box on Ubuntu. But it led me down a rabbit hole, and now I use the ones built by the oobabooga project. I hope they work well for everyone.

@xxl2005: Pull the latest main branch if it doesn't update itself. It should hopefully now use prebuilt llama-cpp-python wheels that work for everyone.
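
If the launcher doesn't update on its own, a manual update would look roughly like this (the checkout directory name is an assumption):

```sh
cd RuinedFooocus          # adjust to wherever the repo is checked out
git pull origin main      # pick up the updated requirements
# relaunch so the prebuilt llama-cpp-python wheel gets installed
```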

@iwr-redmond

You may wish to keep an eye on this thread as @oobabooga is considering switching to the new easy-llama package by @ddh0.
