A Python SDK for interacting with local LLMs on a Raspberry Pi.
Download our pre-configured Raspberry Pi image with all dependencies installed: Download Hi SDK Image

The image is pre-built on Raspberry Pi OS (kernel 6.6.31+rpt-rpi-v8).
Prerequisites:
- Python 3.11.2 or higher
- Raspberry Pi OS
```bash
# Clone and install
git clone https://github.com/svenplb/hi-sdk.git
cd hi-sdk
pip install -e .

# Run setup (this will install Ollama, download models, and start services)
python3 -m hi setup --dev
```
After setup completes, you can start using the SDK:
```bash
hi chat    # Start chatting
hi models  # List available models
```
If you prefer to set up dependencies manually on your existing system:
- Install system dependencies:

  ```bash
  sudo apt-get update
  sudo apt-get install -y python3-pip git
  ```
- Install Ollama (the LLM backend) and pull a supported model:

  ```bash
  curl -fsSL https://ollama.ai/install.sh | sh
  ollama pull gemma2:2b  # one of the supported models (see Features)
  ```
- Clone and set up the SDK:

  ```bash
  git clone https://github.com/svenplb/hi-sdk.git
  cd hi-sdk
  python3 -m venv venv
  source venv/bin/activate
  pip install -r requirements.txt
  pip install -e .
  ```
- Run Ollama in a new terminal:

  ```bash
  ollama serve
  ```
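  Optionally, confirm from another terminal that the backend is up and the model pulled earlier is available (`ollama list` is part of the standard Ollama CLI):

  ```bash
  # Lists locally available models; gemma2:2b should appear here
  ollama list
  ```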
- Start the API server in another terminal:

  ```bash
  python3 -m uvicorn main:app --reload
  ```
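  The server module isn't documented here, but the `uvicorn main:app` entry point suggests a FastAPI app, which by default serves its OpenAPI schema at `/openapi.json`. Assuming the default bind address, a quick smoke test might be:

  ```bash
  # Assumption: FastAPI defaults; adjust host/port if you changed them
  curl http://127.0.0.1:8000/openapi.json
  ```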
Now you can use the SDK, as in the Quick Start example below:
```python
from sdk.client import HiClient

client = HiClient()

# Load one of the supported models (see Features below)
client.load_model("qwen:1.8b", temperature=0.7)

response = client.chat("Tell me a joke")
print(response)
```
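Since conversation tracking is listed among the features below, follow-up calls on the same client should share context; a minimal sketch, assuming `chat()` keeps history across calls:

```python
# Assumption: chat() on the same HiClient instance retains conversation
# history, per the "Conversation tracking" feature listed below.
followup = client.chat("Explain why that joke is funny")
print(followup)
```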
Features:
- Multiple model support (gemma2:2b, qwen:1.8b, qwen:0.5b)
- Streaming responses (see the sketch after this list)
- System prompts
- Performance metrics
- Conversation tracking
- CLI interface
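The streaming and system-prompt APIs aren't shown in this README, so the parameter names below are assumptions for illustration only; check the SDK source for the real signatures:

```python
from sdk.client import HiClient

client = HiClient()

# Hypothetical: a system_prompt keyword on load_model(); the real SDK
# may expose system prompts differently.
client.load_model("qwen:0.5b", system_prompt="You are a concise assistant.")

# Hypothetical: a stream=True flag on chat() that yields text chunks
# incrementally instead of returning one string.
for chunk in client.chat("Write a haiku about the Raspberry Pi", stream=True):
    print(chunk, end="", flush=True)
print()
```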