llama cpp python no longer uses GPU abetlen/llama-cpp-python#912
pseudotensor committed Nov 15, 2023
1 parent a295ce8 commit 77a4457
Showing 1 changed file with 2 additions and 1 deletion.
docs/README_LINUX.md
@@ -175,7 +175,8 @@ These instructions are for Ubuntu x86_64 (other linux would be similar with diff
* GGUF ONLY for CUDA GPU (keeping CPU package in place to support CPU + GPU at same time):
```diff
 pip uninstall -y llama-cpp-python-cuda
-pip install https://github.com/jllllll/llama-cpp-python-cuBLAS-wheels/releases/download/textgen-webui/llama_cpp_python_cuda-0.2.18+cu118-cp310-cp310-manylinux_2_31_x86_64.whl
+python -m pip install llama-cpp-python --prefer-binary --upgrade --extra-index-url=https://jllllll.github.io/llama-cpp-python-cuBLAS-wheels/AVX2/cu118
+# pip install https://github.com/jllllll/llama-cpp-python-cuBLAS-wheels/releases/download/textgen-webui/llama_cpp_python_cuda-0.2.18+cu118-cp310-cp310-manylinux_2_31_x86_64.whl
```
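The pinned wheel that this commit comments out hardcodes the Python tag (`cp310`), the CUDA build (`cu118`), and the platform in its filename, which is why it can silently stop matching your environment; the `--extra-index-url` form lets pip resolve a compatible wheel instead. A small illustrative helper (not part of the repo, just a sketch of the PEP 427 wheel-filename convention) shows what those tags encode:

```python
def parse_wheel_tags(filename):
    """Decompose a wheel filename into its PEP 427 components.

    Assumes a compliant name of the form
    {dist}-{version}[+{local}]-{python}-{abi}-{platform}.whl,
    where dist and platform use underscores, never hyphens.
    """
    name = filename[:-len(".whl")] if filename.endswith(".whl") else filename
    dist, version, python_tag, abi_tag, platform_tag = name.split("-")
    # A local version label like "+cu118" marks the CUDA build variant.
    local = version.split("+")[1] if "+" in version else None
    return {
        "dist": dist,
        "version": version.split("+")[0],
        "cuda": local,
        "python": python_tag,
        "abi": abi_tag,
        "platform": platform_tag,
    }
```

For the wheel above, this yields Python `cp310` and CUDA `cu118`; if your interpreter or CUDA toolkit differs, the pinned URL installs a wheel that cannot load the GPU backend.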
* GGUF ONLY for CPU-AVX (can be used with the -cuda one above):
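Since the symptom this commit fixes was llama-cpp-python silently falling back to CPU, a quick post-install sanity check helps. This is a sketch, assuming the installed version exposes the low-level `llama_supports_gpu_offload` binding (present in recent llama-cpp-python releases; the fallback covers versions or environments where it is missing):

```python
def gpu_offload_status():
    """Report whether the installed llama-cpp-python wheel was built with
    GPU offload support: True/False if detectable, None if the package or
    this particular binding is unavailable."""
    try:
        from llama_cpp import llama_supports_gpu_offload
    except ImportError:
        return None
    return bool(llama_supports_gpu_offload())

if __name__ == "__main__":
    print("GPU offload supported:", gpu_offload_status())
```

If this prints `False` after installing the CUDA wheel, the install picked up a CPU-only build, which is exactly the failure mode reported in abetlen/llama-cpp-python#912.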
