Video demo: YouTube
French version: see the French documentation
A server to run and interact with LLMs, optimized for the Rockchip RK3588(S) and RK3576 platforms. Unlike similar tools such as Ollama or llama.cpp, RKLLama runs models on the NPU.
- `rkllm-runtime` library version: V1.1.4.
- Tested on an Orange Pi 5 Pro (16 GB RAM).
- `./models`: contains your `.rkllm` models.
- `./lib`: C++ `rkllm` library used for inference, plus `fix_freqence_platform`.
- `./app.py`: REST API server.
- `./client.py`: client to interact with the server.
- Python 3.8 to 3.12
- Hardware: Orange Pi 5 Pro (Rockchip RK3588S, 6 TOPS NPU).
- OS: Ubuntu 24.04 arm64.
- Running models on NPU.
- Pull models directly from Hugging Face.
- REST API with documentation.
- Listing available models.
- Dynamic loading and unloading of models.
- Inference requests.
- Streaming and non-streaming modes.
- Message history.
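As an illustration of the features above, a client might drive the REST API roughly like the Python sketch below. The endpoint path `/generate`, the port, and the payload fields are assumptions chosen for illustration, not the documented API; consult the English or French API documentation for the real routes.

```python
import json
import urllib.request

# NOTE: the address, route, and payload fields are hypothetical, used
# only to illustrate the request shape; see the API docs for the
# actual RKLLama routes.
SERVER = "http://localhost:8080"  # assumed address, not documented here

def build_generate_request(prompt, stream=False):
    """Build a JSON inference request (hypothetical payload layout)."""
    return {
        "prompt": prompt,
        "stream": stream,  # streaming vs. non-streaming mode
    }

def send_generate(payload):
    """POST the payload to the (assumed) /generate route."""
    req = urllib.request.Request(
        SERVER + "/generate",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)

# Build (but do not send) a streaming request, to show the shape:
payload = build_generate_request("Hello!", stream=True)
print(json.dumps(payload))
```

The payload-building step is separated from the HTTP call so the request shape can be inspected or logged before anything touches the server.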
- Client: installation guide.
- REST API: English documentation.
- REST API: French documentation.
- Download RKLLama:
git clone https://github.com/notpunchnox/rkllama
cd rkllama
- Install RKLLama
chmod +x setup.sh
sudo ./setup.sh
Virtualization with conda is started automatically, and the NPU frequency is set.
- Start the server
rkllama serve
- Command to start the client
rkllama
or
rkllama help
- List the available models
rkllama list
- Run a model
rkllama run <model_name>
Then start chatting (verbose mode: displays formatted history and statistics).
You can download and install a model from the Hugging Face platform with the following command:
rkllama pull username/repo_id/model_file.rkllm
Alternatively, you can run the command interactively:
rkllama pull
Repo ID ( example: punchnox/Tinnyllama-1.1B-rk3588-rkllm-1.1.4): <your response>
File ( example: TinyLlama-1.1B-Chat-v1.0-rk3588-w8a8-opt-0-hybrid-ratio-0.5.rkllm): <your response>
This will automatically download the specified model file and prepare it for use with RKLLAMA.
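The `pull` argument packs three pieces into one path-like string: the Hugging Face username, the repository name, and the model file. A minimal sketch of how such an argument can be split (the helper name is ours, not part of RKLLama):

```python
def split_pull_arg(arg: str):
    """Split 'username/repo_id/model_file.rkllm' into its parts.

    The first two slash-separated fields form the Hugging Face repo ID
    (user and repository); everything after the second slash is the
    model file name.
    """
    user, repo, filename = arg.split("/", 2)
    return user, repo, filename

# Using the example values from above:
user, repo, filename = split_pull_arg(
    "punchnox/Tinnyllama-1.1B-rk3588-rkllm-1.1.4/"
    "TinyLlama-1.1B-Chat-v1.0-rk3588-w8a8-opt-0-hybrid-ratio-0.5.rkllm"
)
print(user, repo, filename)
```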
Example with Qwen2.5-3B from c01zaut: https://huggingface.co/c01zaut/Qwen2.5-3B-Instruct-RK3588-1.1.4
- Download the model
  - Download `.rkllm` models directly from Hugging Face.
  - Alternatively, convert your GGUF models into the `.rkllm` format (conversion tool coming soon on my GitHub).
- Place the model
  - Navigate to the `~/RKLLAMA/models` directory on your system.
  - Place the `.rkllm` files in this directory.

Example directory structure:

~/RKLLAMA/models/
└── TinyLlama-1.1B-Chat-v1.0.rkllm
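To check that the files landed in the right place, you can list the `.rkllm` models the server would see. A minimal sketch; the directory is parameterized here (and a throwaway directory stands in for the real `~/RKLLAMA/models` so the example is self-contained):

```python
import tempfile
from pathlib import Path

def list_rkllm_models(models_dir):
    """Return the names of the .rkllm model files found in models_dir."""
    return sorted(p.name for p in Path(models_dir).glob("*.rkllm"))

# Demo against a temporary directory standing in for ~/RKLLAMA/models:
with tempfile.TemporaryDirectory() as d:
    (Path(d) / "TinyLlama-1.1B-Chat-v1.0.rkllm").touch()
    (Path(d) / "notes.txt").touch()  # non-model file, ignored by the glob
    print(list_rkllm_models(d))  # -> ['TinyLlama-1.1B-Chat-v1.0.rkllm']
```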
- Go to the `~/RKLLAMA/` folder:
cd ~/RKLLAMA/
cp ./uninstall.sh ../
cd ../ && chmod +x ./uninstall.sh && ./uninstall.sh
- If you don't have the `uninstall.sh` file:
wget https://raw.githubusercontent.com/NotPunchnox/rkllama/refs/heads/main/uninstall.sh
chmod +x ./uninstall.sh
./uninstall.sh
- Add multimodal models
- Add embedding models
- Add RKNN for ONNX models (TTS, image classification/segmentation, ...)
- GGUF/HF to RKLLM conversion software
- System monitor