Testing Llama 3.1 tool calling with llama.cpp and the 4-bit quantized 8B Instruct model. This includes:
- Running Meta's example.
- Checking whether multiple tools can be selected from a single prompt.
- Checking whether the LLM can use a previous tool call's response.
- Checking complex JSON extraction with a tool call.
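To make the checks above concrete, here is a minimal sketch of how raw model output could be parsed into tool calls. Llama 3.1 Instruct typically emits tool calls as JSON objects of the form `{"name": ..., "parameters": {...}}`, sometimes preceded by the `<|python_tag|>` token; the exact formatting depends on the prompt and chat template, so `parse_tool_calls` is a hypothetical helper to adapt, not part of this repo.

```python
import json

def parse_tool_calls(output: str):
    """Extract Llama 3.1-style JSON tool calls from raw model output.

    Assumes one JSON object per line, optionally prefixed with
    <|python_tag|>; lines that are not valid tool-call JSON are skipped.
    """
    calls = []
    for line in output.strip().splitlines():
        line = line.strip().removeprefix("<|python_tag|>")
        try:
            obj = json.loads(line)
        except json.JSONDecodeError:
            continue  # plain text, not a tool call
        if isinstance(obj, dict) and "name" in obj and "parameters" in obj:
            calls.append(obj)
    return calls

raw = '{"name": "get_weather", "parameters": {"city": "Hanoi"}}'
print(parse_tool_calls(raw))
```

Because the parser returns a list, it also covers the multiple-tools case: a response containing several tool-call lines yields several entries.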
Before you begin, ensure you have met the following requirements:
- You have installed Python 3.9 or later.
- You have installed Poetry.
- Clone the repository:

  ```shell
  git clone https://github.com/AgiFlow/llama31
  cd llama31
  ```

- Install the dependencies:

  ```shell
  poetry install
  ```

- Activate the virtual environment:

  ```shell
  poetry shell
  ```

- Download models. This repo uses lmstudio-community/Meta-Llama-3.1-8B-Instruct-GGUF from lmstudio-community for testing. You can also use other models of your choice.

- Launch Jupyter Notebook:

  ```shell
  poetry run jupyter notebook
  ```
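One of the checks above feeds a previous tool call's response back to the model. As a hedged sketch: in Llama 3.1's chat format, tool output is returned to the model in a message with the special `ipython` role. The `append_tool_result` helper and the exact message shapes below are assumptions to adapt to your inference wrapper (for example, `create_chat_completion` in llama-cpp-python), not code from this repo.

```python
import json

def append_tool_result(messages, tool_call, result):
    """Record the assistant's tool call, then the tool's output.

    Llama 3.1 expects tool output under the "ipython" role so the model
    can use it in its next turn; field names here are an assumption.
    """
    messages.append({"role": "assistant", "content": json.dumps(tool_call)})
    messages.append({"role": "ipython", "content": json.dumps(result)})
    return messages

messages = [{"role": "user", "content": "What's the weather in Hanoi?"}]
call = {"name": "get_weather", "parameters": {"city": "Hanoi"}}
append_tool_result(messages, call, {"temp_c": 31})
print(messages[-1]["role"])
```

After appending the tool result, the extended `messages` list is sent back to the model, which can then answer using the returned data.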
To contribute to this project, please fork the repository and create a pull request. For major changes, please open an issue first to discuss what you would like to change.
This project is licensed under the MIT License.