Improved README instructions for building in Termux on Android devices #2840

Finally, copy the `llama` binary and the model files to your device storage. Here is a demo:

https://user-images.githubusercontent.com/271616/225014776-1d567049-ad71-4ef2-b050-55b0b3b9274c.mp4

#### Building the Project in Termux (F-Droid)
[Termux](https://termux.dev/) is an Android terminal emulator that lets you build and run `llama.cpp` directly on the device, with no root access or SD card required.

Ensure Termux is up to date and clone the repo:
```
apt update && apt upgrade
cd
git clone https://github.com/ggerganov/llama.cpp
```

Build `llama.cpp`:
```
cd llama.cpp
make
```
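If the build succeeds, a `main` binary should appear in the repository root (assuming the Makefile layout at the time of writing). A quick smoke test:

```shell
# Print the usage/help text to confirm the binary was built and runs:
./main -h
```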

It's possible to include OpenBLAS while building:
```
pkg install libopenblas
cd llama.cpp
cmake -B build -DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS
cd build
cmake --build . --config Release
```
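Note that with this CMake invocation the binaries land under `build/bin` rather than the repository root, so an invocation would look like the following sketch (the model path and prompt are illustrative):

```shell
# The CMake build writes binaries to build/bin, not the repo root:
~/llama.cpp/build/bin/main -m ~/7b-model.gguf -p "Hello from Termux" -n 16
```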

Move your model to the home directory (`~/`), for example:
```
cd
cd storage/downloads
mv 7b-model.gguf ~/
```
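If `~/storage/downloads` does not exist yet, Termux has not been granted storage access; `termux-setup-storage` requests the permission and creates the `~/storage` symlinks (a one-time step):

```shell
# One-time setup: request shared-storage permission and create ~/storage/
termux-setup-storage
```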

Usage example: `./main -m ~/7b-model.gguf --color -c 2048 --keep -1 -n -2 -b 10 -i -ins`

Alternatively, to enable CLBlast, first install the requisite OpenCL packages:
```
apt install ocl-icd opencl-headers opencl-clhpp clinfo
```
```
export LD_LIBRARY_PATH=/vendor/lib64:$LD_LIBRARY_PATH
```

(Note: some Android devices, like the Zenfone 8, need the following command instead: `export LD_LIBRARY_PATH=/system/vendor/lib64:$LD_LIBRARY_PATH`. Source: https://www.reddit.com/r/termux/comments/kc3ynp/opencl_working_in_termux_more_in_comments/ )

For easy and swift re-execution, consider saving this final part in a `.sh` script file. This will allow you to rerun `./main (...)` with minimal hassle.
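For example, a minimal wrapper script (the name `run-llama.sh` and the flags are just an illustration; adjust the library path and options to your device):

```shell
#!/bin/sh
# run-llama.sh - convenience wrapper for interactive llama.cpp sessions
export LD_LIBRARY_PATH=/vendor/lib64:$LD_LIBRARY_PATH
~/llama.cpp/main -m ~/7b-model.gguf --color -c 2048 --keep -1 -n -2 -b 10 -i -ins
```

Make it executable once with `chmod +x run-llama.sh`, then rerun with `./run-llama.sh`.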

### Docker
