Fine-tuning Llama3 8b to generate JSON formats for arithmetic questions and process the output to perform calculations.
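The pipeline described above (model emits structured JSON, a post-processor performs the arithmetic) can be sketched roughly as follows. The JSON schema, field names, and `evaluate` helper here are illustrative assumptions, not the repo's actual format:

```python
import json
import operator

# Hypothetical schema: {"operation": "add", "operands": [17, 25]}.
# The repo's real JSON format may differ — this is only a sketch.
OPS = {
    "add": operator.add,
    "sub": operator.sub,
    "mul": operator.mul,
    "div": operator.truediv,
}

def evaluate(model_output: str) -> float:
    """Parse the model's JSON output and perform the calculation."""
    parsed = json.loads(model_output)
    fn = OPS[parsed["operation"]]
    result = parsed["operands"][0]
    for operand in parsed["operands"][1:]:
        result = fn(result, operand)
    return result

print(evaluate('{"operation": "add", "operands": [17, 25]}'))  # 42
```

Keeping the model's job to producing JSON and doing the arithmetic in plain code sidesteps LLM calculation errors entirely.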
Updated May 28, 2024 - Python
Unify Efficient Fine-Tuning of 100+ LLMs
🐋 MindChat: a large language model for psychological counseling — talking through life's journey, facing its hardships with a smile
META LLAMA3 GENAI Real World UseCases End To End Implementation Guide
End to End Generative AI Industry Projects on LLM Models with Deployment
Toolkit for fine-tuning, ablating and unit-testing open-source LLMs.
Firefly: a training toolkit for large models, supporting Yi1.5, Phi-3, Llama3, Gemma, MiniCPM, Yi, Deepseek, Orion, Xverse, Mixtral-8x7B, Zephyr, Mistral, Baichuan2, Llama2, Llama, Qwen, Baichuan, ChatGLM2, InternLM, Ziya2, Vicuna, Bloom, and more
Finetune any model on HF in less than 30 seconds
This repo contains everything about transformers and NLP.
This project fine-tunes large language models (LLMs) for text-based recommendations, using a novel prompt mechanism to improve accuracy and user satisfaction. It demonstrates efficient model adaptation with diverse datasets, leveraging advanced libraries and techniques for optimal performance.
33B Chinese LLM, DPO QLORA, 100K context, AirLLM 70B inference with single 4GB GPU
Tuning the Finetuning: An exploration of achieving success with QLoRA
🐳 Aurora is a Chinese-language MoE model. Built on Mixtral-8x7B, it activates the model's Chinese open-domain chat capability.
Qwen-1.5-1.8B sentiment analysis with prompt optimization and QLoRA fine-tuning
Kickstart with LLMs