Failed to build llama_cpp_python? #223
I wonder if it is miniconda that has problems with RF's requirements. Can you try with virtualenv or pyenv and some version of Python 3.10?
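For reference, a minimal sketch of the suggested clean environment, assuming Python 3.10 is already on the PATH (the environment name `rf-env` is made up for illustration):

```
# Create and activate a fresh virtualenv, isolated from conda
python3.10 -m venv rf-env   # "rf-env" is a hypothetical name
source rf-env/bin/activate
pip install --upgrade pip
```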
Because llama-cpp-python doesn't publish wheels to PyPI, all sorts of strange errors crop up. At the moment, pip_versions says:
You may wish to consider changing this to:
References:
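As a hedged illustration only (the actual pip_versions contents and references are not quoted above): the llama-cpp-python project documents a prebuilt CPU wheel index that sidesteps compiling from source, along these lines:

```
# Pull a prebuilt CPU wheel instead of building from source.
# The extra index URL is the one documented in the llama-cpp-python README;
# add a version pin here if the project requires one.
pip install llama-cpp-python \
  --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cpu
```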
I use miniconda with Python 3.10. For the operating system, I use Linux.
Thanks for the tip. Unfortunately those wheels were built with musl (?) and didn't work out of the box on Ubuntu. But it led me down a rabbit hole, and now I use the ones built by the oobabooga project. I hope they work well for everyone. @xxl2005: Pull the latest main branch if it doesn't update itself. It should now use prebuilt llama-cpp-python modules that work for everyone.
You may wish to keep an eye on this thread as @oobabooga is considering switching to the new easy-llama package by @ddh0.
Hello, I have an install problem:
```
Building wheels for collected packages: llama_cpp_python
Building wheel for llama_cpp_python (pyproject.toml): started
Building wheel for llama_cpp_python (pyproject.toml): finished with status 'error'
Failed to build llama_cpp_python
stderr: error: subprocess-exited-with-error
× Building wheel for llama_cpp_python (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [114 lines of output]
*** scikit-build-core 0.10.7 using CMake 3.31.2 (wheel)
*** Configuring CMake...
loading initial cache file /tmp/tmpg9k8bu4_/build/CMakeInit.txt
-- The C compiler identification is GNU 14.2.1
-- The CXX compiler identification is GNU 14.2.1
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/gcc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/g++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found Git: /usr/bin/git (found version "2.47.1")
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
-- Found OpenMP_C: -fopenmp (found version "4.5")
-- Found OpenMP_CXX: -fopenmp (found version "4.5")
-- Found OpenMP: TRUE (found version "4.5")
-- OpenMP found
-- Using llamafile
-- Warning: ccache not found - consider installing it for faster compilation or disable this warning with GGML_CCACHE=OFF
-- CMAKE_SYSTEM_PROCESSOR: x86_64
-- x86 detected
CMake Warning (dev) at CMakeLists.txt:9 (install):
Target llama has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
Call Stack (most recent call first):
CMakeLists.txt:73 (llama_cpp_python_install_target)
This warning is for project developers. Use -Wno-dev to suppress it.
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for llama_cpp_python
ERROR: Failed to build installable wheels for some pyproject.toml based projects (llama_cpp_python)
```
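The CMake warning above is harmless; the actual compile failure is inside the truncated `[114 lines of output]`. A minimal sketch of how to surface it, assuming a standard shell (the flags below are stock pip options, not something quoted in this thread):

```
# Re-run with verbose output and no cached wheel so the full
# compiler error is printed instead of being truncated.
pip install llama_cpp_python --no-cache-dir -v
```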