⛓️ This project provides a WebUI-integrated platform for the latest LLMs, enabling customized models and applications without coding!
The function matrix is shown below:
For beginners, please refer to the user manual.
The project has been tested under Python 3.8 - 3.10 and CUDA 11.7/11.8, on Windows and Linux.
# Clone the repo
$ git clone https://github.com/wpydcr/LLM-Kit.git
# cd to the directory
$ cd LLM-Kit
# install requirements
$ pip install -r requirements.txt
Or simply download a prebuilt environment: Windows Env, Linux Env.
A GPU driver and CUDA are also required and should be pre-installed.
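Since the project is only tested on Python 3.8 - 3.10, a quick interpreter check before installing can save a broken setup. A minimal sketch:

```python
import sys

def in_tested_range(version_info=sys.version_info):
    """True if the interpreter version falls in the tested 3.8 - 3.10 range."""
    return (3, 8) <= tuple(version_info[:2]) <= (3, 10)

if not in_tested_range():
    print("Warning: this Python version is outside the tested 3.8 - 3.10 range.")
```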
- For Windows, double-click web-demo-EN.bat to run.
- For Linux, run web-demo-EN.sh.
- To try the LLM with a MySQL database connected, please download this.
- To try the LLM with cosplay, please download this.
- To try responses with memes, please download this.
- env: integrated environment
- utils: utility components
- modules: module code
- agent: agent-related code
- chatdb: MySQL connection code
- svc: SVC code
- vits: VITS code
- apply: demo application code
- model: model code
- agent: agent-related code
- data: general data
- apply: demo application data
- audio: generated audio
- emotion: memes used for LLM responses
- play: cosplay character settings
- documents: local knowledge vector store
- modeldata: model data for training
- LLM: LLM training data
- Embedding: embedding-model training data
- apply: demo application data
- ui: UI code
- models: model files
- LLM: LLM model files
- Embedding: embedding model files
- LoRA: LoRA model files
- live2d_models: Live2D model files
- svc_models: SVC models
- hubert_model: voice decoder files
- svc: SVC model files
- vits_pretrained_models: VITS model files
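For a fresh checkout, the skeleton above can be scaffolded programmatically. A minimal sketch, assuming the six generic entries (env, utils, modules, data, ui, models) are the top-level directories; the nesting of the remaining entries is not reproduced here:

```python
from pathlib import Path

# Assumed top-level directories, taken from the layout above.
TOP_LEVEL = ["env", "utils", "modules", "data", "ui", "models"]

def scaffold(root):
    """Create the assumed top-level skeleton under *root* and list what exists."""
    root = Path(root)
    for name in TOP_LEVEL:
        (root / name).mkdir(parents=True, exist_ok=True)
    return sorted(p.name for p in root.iterdir() if p.is_dir())
```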
- LLM API support (no GPU)
- LLM support (train/inference)
- 4-bit and 8-bit quantization (bitsandbytes only supports Linux)
- DeepSpeed (on Windows, currently only inference mode is supported)
- chatglm-6b
- chatglm2-6b
- chatglm2-6b-32k
- moss-moon-003-sft
- phoenix-chat-7b
- Guanaco
- baichuan-vicuna-chinese-7b
- Baichuan-13B-Chat
- internlm-chat-7b-8k
- chinese-alpaca-2-7b(llama2)
- Qwen-7B-Chat
- Qwen-14B-Chat
- Baichuan2-7B-Chat
- Baichuan2-13B-Chat
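The practical value of the 4-bit and 8-bit loading listed above is memory footprint. A back-of-envelope sketch of the weight memory for a 7B-parameter model such as those in this list (activations, KV cache, and optimizer state are ignored):

```python
def weight_memory_gib(n_params: float, bits_per_param: int) -> float:
    """Approximate GiB needed just to hold the model weights."""
    return n_params * bits_per_param / 8 / 1024**3

n = 7e9  # e.g. a 7B model
fp16 = weight_memory_gib(n, 16)  # ~13.0 GiB
int8 = weight_memory_gib(n, 8)   # ~6.5 GiB
int4 = weight_memory_gib(n, 4)   # ~3.3 GiB
```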
- Multimodal large-model support (inference)
- qwen-vl
- Fine-tuning support
- Embedding model support (train/inference: any model that can be loaded by HuggingFaceEmbeddings)
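Embedding models loaded this way produce vectors that are usually compared by cosine similarity. A pure-Python sketch of that comparison step; the three-dimensional vectors are toy stand-ins, not real embeddings:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" for illustration only.
query = [0.1, 0.9, 0.2]
doc_close = [0.2, 0.8, 0.1]
doc_far = [0.9, 0.0, 0.1]

assert cosine_similarity(query, doc_close) > cosine_similarity(query, doc_far)
```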
- Tools
- Chat
- LLM API parallel calls
- LLM API streaming mode
- Prompt templates
- Image Generation
- Midjourney
- GEN-2
- Pika
- Dataset
- LLM training dataset creation
- Embedding training dataset creation
- LLM dataset conversion
- Embedding dataset conversion
- LangChain
- Local Knowledge base
- FAISS
- Parallel local-knowledge-base calls for local LLMs
- Internet connection
- MySQL database connection
- Agent implementation
- Local Knowledge base
- Plugin model
- Chat
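The LLM API parallel-call tool listed above can be sketched with concurrent.futures; call_llm_api below is a hypothetical stub standing in for a real, network-bound provider call:

```python
from concurrent.futures import ThreadPoolExecutor

def call_llm_api(prompt: str) -> str:
    """Hypothetical stand-in for a real (network-bound) LLM API call."""
    return f"answer to: {prompt}"

def parallel_calls(prompts, max_workers=4):
    """Issue the API calls concurrently; pool.map preserves input order."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(call_llm_api, prompts))

replies = parallel_calls(["q1", "q2", "q3"])
```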
- Application demo
- Add API support
- Deploy the API with FastAPI
- Implement a web UI demo via API calls
- Vue WebUI
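A web UI driven by API calls typically exchanges JSON with the deployed endpoint. The payload shape below (prompt, history, and stream fields on a hypothetical /chat route) is an assumption for illustration, not the project's actual schema:

```python
import json

def build_chat_request(prompt, history=None, stream=False):
    """Assemble a JSON request body for a hypothetical /chat endpoint."""
    body = {
        "prompt": prompt,
        "history": history or [],  # list of [user, assistant] turn pairs
        "stream": stream,
    }
    return json.dumps(body, ensure_ascii=False)

def parse_chat_response(raw: str) -> str:
    """Pull the reply text out of a hypothetical JSON response."""
    return json.loads(raw)["response"]

req = build_chat_request("Hello", history=[["Hi", "Hello!"]])
```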
Initiator; responsible for the overall project architecture and technical planning
Responsible for Python development of Gradio, the graph vector database, the database, API interface integration, etc.
Responsible for Python development of the graph vector database, Live2D, VITS, SVC, Gradio, etc.
Responsible for back-end development of LLM training and inference
Responsible for prompt engineering, embedding-model validation, and inference back-end development
Responsible for back-end development of embedding-model training
For details, see the user manual.
The code in this repository is open source under the AGPL-3.0 license.
On the one hand, we hope to strengthen the productization of the project through open source; on the other hand, we hope to absorb more practical scenarios from the community to keep improving the product, and we welcome everyone to participate in the project.
AGPL-3.0 is an OSI-approved license that meets all the standards of free and open-source software. Open source will always be at the core of what we do, we will always insist on it, and we believe that with the community's momentum we will do an even better job.
Many developers may have questions about this license, but the open-source community has many projects that use AGPL-3.0, such as MongoDB, Grafana, and Loki, and Wikipedia maintains a list of open-source projects licensed under AGPL-3.0.
A key point of the AGPL-3.0 license is that any redistributed version that modifies the upstream project's code must also be open source. This restricts enterprises that want to fork the project's code and then distribute a closed-source commercial version in direct competition with the upstream maintainers. If an enterprise only uses the software internally, without modification at any level, it need not worry about the restrictions of AGPL-3.0, which is designed to encourage and enable third parties who modify the software to contribute back to the project and the community. We think this is a fairer way forward, and we believe it will help us build a stronger community.
Simply put: if you modify the project's source code, you must contribute those modifications back to the community; you may not distribute or sell modified or derived code as closed-source commercial software.
We also offer a commercial license. If you need to develop, modify, or use this product commercially in any way, please contact us ([email protected]) for a commercial license so that your use complies with the AGPL-3.0 terms.
In addition, we also accept customized development for individual or corporate needs.
In China, the GPL currently has the characteristics of a contract: it is a kind of civil legal act and falls within the scope of China's Contract Law. The project team reserves the right to litigate.
The project's open-source team retains the final right of interpretation of this license.
Please cite this repository if you use its data or code:
@misc{wupingyu2023,
  author = {Pingyu Wu},
  title = {LLM Kit},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/wpydcr/LLM-Kit.git}},
}