- 📋 Table of Contents
- 🚀 LLM Instruction tuning for school math questions
- 🗺️ Roadmap
- ⚖️ License
- 🔗 Links
- 📚 References & Citations
An end-to-end MLOps project for LLM instruction fine-tuning with PEFT & QLoRA to solve grade school math problems.
Base LLM: OpenLLaMA
Dataset: Grade School Math Instructions Dataset
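For context, the core fine-tuning step loads the base model in 4-bit precision and attaches LoRA adapters through PEFT. The snippet below is a minimal sketch of that setup, not the project's exact training code; the model ID, LoRA hyperparameters, and target modules are illustrative assumptions.

```python
# Minimal QLoRA setup sketch (illustrative values, not this repo's exact config)
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_model_id = "openlm-research/open_llama_3b"  # assumed OpenLLaMA checkpoint

# Load the base model quantized to 4-bit (the "Q" in QLoRA)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach small trainable LoRA adapters; the frozen 4-bit base weights stay untouched
model = prepare_model_for_kbit_training(model)
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction of weights are trainable
```

From here the adapter can be trained on the instruction dataset with the standard Hugging Face `Trainer`, which is what keeps the memory footprint small enough for a single GPU.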
- NLP: PyTorch, Hugging Face Transformers, Accelerate, PEFT
- Research: Jupyter Lab, MLflow
- Framework: FastAPI
- Deployment: Docker, Amazon Web Services (AWS), GitHub Actions
- Version Control: Git, DVC, GitHub
The project structure template can be found here.
├── LICENSE
├── Makefile                <- Makefile with commands like `make data` or `make train`
├── README.md               <- The top-level README for developers using this project.
├── requirements.txt        <- The requirements file for reproducing the analysis environment, e.g.
│                              generated with `pip freeze > requirements.txt`
│
├── config                  <- Stores pipelines' configuration files
│   ├── data-config.yaml
│   ├── model-config.yaml
│   └── model-parameters.yaml
│
├── data
│   ├── external            <- Data from third-party sources.
│   ├── interim             <- Intermediate data that has been transformed.
│   ├── processed           <- The final, canonical data sets for modeling.
│   └── raw                 <- The original, immutable data dump.
│
├── assets                  <- Stores public assets for the README file
├── docs                    <- A default Sphinx project; see sphinx-doc.org for details
│
├── models                  <- Trained and serialized models, model predictions, or model summaries
│
├── notebooks               <- Jupyter notebooks for research.
│
├── setup.py                <- Makes this project pip installable with `pip install -e`
├── src                     <- Source code for use in this project.
│   ├── __init__.py         <- Makes src a Python module
│   │
│   ├── logging             <- Defines loggers for the app
│   ├── utils
│   │   ├── __init__.py
│   │   └── common.py       <- Functions for common utilities
│   │
│   ├── data                <- Scripts to download or generate data
│   │   ├── components       <- Classes for pipelines
│   │   ├── pipeline         <- Scripts for data aggregation
│   │   ├── configuration.py <- Class to manage config files
│   │   ├── entity.py        <- Stores configuration dataclasses
│   │   └── make_dataset.py  <- Script to run data pipelines
│   │
│   └── models              <- Scripts to train models and to make predictions with trained models
│       ├── components       <- Classes for pipelines
│       ├── pipeline         <- Scripts for model pipelines
│       ├── configuration.py <- Class to manage config files
│       ├── entity.py        <- Stores configuration dataclasses
│       ├── predict_model.py <- Script to run the prediction pipeline
│       └── train_model.py   <- Script to run the training pipeline
│
├── main.py                 <- Script to run the model training pipeline
├── app.py                  <- Script to start the FastAPI app
│
├── .env.example            <- Example .env structure
├── Dockerfile              <- Configures the Docker container image
├── .github
│   └── workflows
│       └── main.yaml       <- CI/CD config
│
├── .gitignore              <- Specifies files to be ignored by Git
├── .dvcignore              <- Specifies files to be ignored by DVC
│
├── .dvc                    <- DVC config
├── dvc.lock                <- Stores DVC-tracked information
└── dvc.yaml                <- Specifies pipeline version control
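The `configuration.py` / `entity.py` / `components` / `pipeline` split shown above follows a common pattern: a YAML config is parsed into a typed dataclass, the dataclass is handed to a component that does the actual work, and a thin pipeline script wires the two together. The sketch below illustrates that pattern with hypothetical file names, keys, and classes; it is not copied from the repository.

```python
# Illustrative sketch of the configuration -> entity -> component -> pipeline pattern.
# File names, YAML keys, and class names are hypothetical, not the repo's actual code.
from dataclasses import dataclass
from pathlib import Path
import yaml


@dataclass(frozen=True)
class DataIngestionConfig:          # entity.py: typed view of one config section
    source_url: str
    raw_data_dir: Path


class ConfigurationManager:         # configuration.py: loads and validates YAML configs
    def __init__(self, config_path: Path = Path("config/data-config.yaml")):
        with open(config_path) as f:
            self._cfg = yaml.safe_load(f)

    def get_data_ingestion_config(self) -> DataIngestionConfig:
        section = self._cfg["data_ingestion"]
        return DataIngestionConfig(
            source_url=section["source_url"],
            raw_data_dir=Path(section["raw_data_dir"]),
        )


class DataIngestion:                # components/: does the actual work
    def __init__(self, config: DataIngestionConfig):
        self.config = config

    def run(self) -> None:
        self.config.raw_data_dir.mkdir(parents=True, exist_ok=True)
        # download or copy the raw dataset into raw_data_dir here


if __name__ == "__main__":          # pipeline/: thin script wiring config and component
    ingestion_config = ConfigurationManager().get_data_ingestion_config()
    DataIngestion(ingestion_config).run()
```

Keeping configuration parsing, dataclasses, and components separate makes each pipeline stage easy to test in isolation and to version with DVC.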
- Clone the project
git clone https://github.com/Logisx/LLMath-QLoRA
- Go to the project directory
cd LLMath-QLoRA
- Install dependencies
pip install -r requirements.txt
- Start the app
python app.py
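Once running, the app serves the fine-tuned model over HTTP. The snippet below is a minimal sketch of the kind of endpoint `app.py` could expose; the route name, request schema, and placeholder response are assumptions, not the repository's actual interface.

```python
# Minimal FastAPI serving sketch (route name and schema are assumed, not the actual API)
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="LLMath-QLoRA")


class MathQuestion(BaseModel):
    question: str


@app.post("/predict")
def predict(payload: MathQuestion) -> dict:
    # In the real app this would call the prediction pipeline built on the
    # fine-tuned OpenLLaMA + LoRA adapter; a placeholder answer is returned here.
    return {"answer": f"Echo: {payload.question}"}
```

With the server up, the endpoint can be exercised with any HTTP client or through FastAPI's auto-generated interactive docs at `/docs`.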
- Testing: Develop unit and integration tests
- Hyperparameter tuning: Train a better model through hyperparameter tuning
- User interface: Create a user-friendly app interface
- Efficient Fine-Tuning with LoRA: A Guide to Optimal Parameter Selection for Large Language Models
- Grade School Math Instructions Fine-Tune OPT
@article{cobbe2021gsm8k,
title={Training Verifiers to Solve Math Word Problems},
author={Cobbe, Karl and Kosaraju, Vineet and Bavarian, Mohammad and Chen, Mark and Jun, Heewoo and Kaiser, Lukasz and Plappert, Matthias and Tworek, Jerry and Hilton, Jacob and Nakano, Reiichiro and Hesse, Christopher and Schulman, John},
journal={arXiv preprint arXiv:2110.14168},
year={2021}
}
@software{openlm2023openllama,
author = {Geng, Xinyang and Liu, Hao},
title = {OpenLLaMA: An Open Reproduction of LLaMA},
month = may,
year = 2023,
url = {https://github.com/openlm-research/open_llama}
}
@software{together2023redpajama,
author = {Together Computer},
title = {RedPajama-Data: An Open Source Recipe to Reproduce LLaMA training dataset},
month = apr,
year = 2023,
url = {https://github.com/togethercomputer/RedPajama-Data}
}
@article{touvron2023llama,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and others},
journal={arXiv preprint arXiv:2302.13971},
year={2023}
}