A configurable, modular RAG framework.
GoMate is a configurable, modular Retrieval-Augmented Generation (RAG) framework designed to provide reliable input and trusted output, ensuring users obtain high-quality, trustworthy results in retrieval-based question-answering scenarios.
At the core of GoMate's design are its high configurability and modularity, which let users flexibly adjust and optimize each component to meet the requirements of a wide range of application scenarios.
“Reliable input, Trusted output”
- Packaged and built gomate; it can now be installed via pip or from source
- Added MinerU document parsing: a one-stop, open-source, high-quality data extraction tool that supports extraction from PDFs, web pages, and multiple e-book formats [20240907]
- RAPTOR: recursive tree retriever implementation
- Modular support for parsing multiple file types (sketched below); currently supported formats include text, docx, ppt, excel, html, pdf, md, and more
- Improved DenseRetriever, with support for index building, incremental appending, and index saving; saved content includes documents, vectors, and the index itself
- Added BGE ranking for ReRank and HyDE for Rewriter
- Added BgeJudge for Judge, which determines whether a document is useful [20240711]
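As a rough illustration of the multi-format parsing mentioned above, a sketch along the following lines should work, assuming CommonParser exposes a parse(file_path) method analogous to TextParser.parse() shown later (check gomate.modules.document for the exact API; the file names are placeholders):
from gomate.modules.document.common_parser import CommonParser

# Sketch only: CommonParser is assumed to pick a parser by file extension
# and return a list of text paragraphs, like TextParser.parse() below.
parser = CommonParser()
for path in ["report.pdf", "slides.pptx", "notes.md"]:
    paragraphs = parser.parse(path)
    print(path, len(paragraphs))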
- Create a conda environment (optional)
conda create -n gomate python=3.9
conda activate gomate
- Install with pip
pip install gomate
- Or download the source code
git clone https://github.com/gomate-community/GoMate.git
- and install the dependencies
pip install -e .
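Either way, a quick import check confirms the installation:
python -c "import gomate"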
├── applications
├── modules
| ├── citation: answer and evidence citation
| ├── document: document parsing and chunking, supporting multiple document types
| ├── generator: generation module
| ├── judger: document selection
| ├── prompt: prompts
| ├── refiner: information summarization
| ├── reranker: reranking module
| ├── retrieval: retrieval module
| └── rewriter: query rewriting module
import pickle
import pandas as pd
from tqdm import tqdm
from gomate.modules.document.chunk import TextChunker
from gomate.modules.document.txt_parser import TextParser
from gomate.modules.document.utils import PROJECT_BASE
from gomate.modules.generator.llm import GLM4Chat
from gomate.modules.reranker.bge_reranker import BgeRerankerConfig, BgeReranker
from gomate.modules.retrieval.bm25s_retriever import BM25RetrieverConfig
from gomate.modules.retrieval.dense_retriever import DenseRetrieverConfig
from gomate.modules.retrieval.hybrid_retriever import HybridRetriever, HybridRetrieverConfig
def generate_chunks():
    tp = TextParser()  # parser for txt-format files
    tc = TextChunker()
    paragraphs = tp.parse(r'H:/2024-Xfyun-RAG/data/corpus.txt', encoding="utf-8")
    print(len(paragraphs))
    chunks = []
    for content in tqdm(paragraphs):
        chunk = tc.chunk_sentences([content], chunk_size=1024)
        chunks.append(chunk)
    with open(f'{PROJECT_BASE}/output/chunks.pkl', 'wb') as f:
        pickle.dump(chunks, f)
Each line of corpus.txt is one news paragraph; you can adapt the paragraph-reading logic as needed (a sketch follows below). The corpus comes from the 大模型RAG智能问答挑战赛 competition.
TextChunker is the text-chunking component; its main feature is the use of InfiniFlow/huqie as the tokenizer for text retrieval, which makes it well suited to RAG scenarios.
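For example, since each line of corpus.txt is already one paragraph, the paragraph-reading step can be replaced with a plain line-by-line read (a minimal sketch under that assumption):
# Minimal sketch: read corpus.txt line by line, one news paragraph per line.
def read_paragraphs(path, encoding="utf-8"):
    with open(path, encoding=encoding) as f:
        return [line.strip() for line in f if line.strip()]

paragraphs = read_paragraphs(r'H:/2024-Xfyun-RAG/data/corpus.txt')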
Configure the retriever:
Below is a reference configuration for the hybrid retriever HybridRetriever, whose HybridRetrieverConfig is composed of a BM25RetrieverConfig and a DenseRetrieverConfig.
# BM25 and Dense Retriever configurations
bm25_config = BM25RetrieverConfig(
    method='lucene',
    index_path='indexs/description_bm25.index',
    k1=1.6,
    b=0.7
)
bm25_config.validate()
print(bm25_config.log_config())

dense_config = DenseRetrieverConfig(
    model_name_or_path=embedding_model_path,  # local path to the embedding model, e.g. bge-large-zh-v1.5
    dim=1024,
    index_path='indexs/dense_cache'
)
config_info = dense_config.log_config()
print(config_info)

# Hybrid Retriever configuration
# The BM25 and dense scores are not on the same scale, so they are combined with weights
hybrid_config = HybridRetrieverConfig(
    bm25_config=bm25_config,
    dense_config=dense_config,
    bm25_weight=0.7,  # weight for BM25 retrieval results
    dense_weight=0.3  # weight for dense retrieval results
)
hybrid_retriever = HybridRetriever(config=hybrid_config)
Build the index:
# Build the index
hybrid_retriever.build_from_texts(corpus)
# Save the index
hybrid_retriever.save_index()
Once the index has been built it can be reused; to skip the steps above, simply load it:
hybrid_retriever.load_index()
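On later runs, a simple build-or-load guard avoids rebuilding. This is a sketch; it assumes the dense index directory from the config above is a reasonable existence check:
import os

# Sketch: build and save the indexes on the first run, reuse them afterwards.
if os.path.exists('indexs/dense_cache'):
    hybrid_retriever.load_index()
else:
    hybrid_retriever.build_from_texts(corpus)
    hybrid_retriever.save_index()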
Retrieval test:
query = "支付宝"
results = hybrid_retriever.retrieve(query, top_k=10)
print(len(results))
# Output results
for result in results:
    print(f"Text: {result['text']}, Score: {result['score']}")
reranker_config = BgeRerankerConfig(
    model_name_or_path=reranker_model_path  # local path to the BGE reranker model
)
bge_reranker = BgeReranker(reranker_config)
glm4_chat = GLM4Chat(llm_model_path)  # local path to the GLM-4 chat model
# ==================== Retrieval QA ====================
test = pd.read_csv(test_path)  # test set CSV with a 'question' column
answers = []
for question in tqdm(test['question'], total=len(test)):
    search_docs = hybrid_retriever.retrieve(question, top_k=10)
    search_docs = bge_reranker.rerank(
        query=question,
        documents=[doc['text'] for doc in search_docs]
    )
    # print(search_docs)
    content = '\n'.join([f'信息[{idx}]:' + doc['text'] for idx, doc in enumerate(search_docs)])
    answer = glm4_chat.chat(prompt=question, content=content)
    answers.append(answer[0])
    print(question)
    print(answer[0])
    print("************************************\n")
test['answer'] = answers
test[['answer']].to_csv(f'{PROJECT_BASE}/output/gomate_baseline.csv', index=False)
Building a custom RAG application
import os
from gomate.modules.document.common_parser import CommonParser
from gomate.modules.generator.llm import GLMChat
from gomate.modules.reranker.bge_reranker import BgeReranker
from gomate.modules.retrieval.dense_retriever import DenseRetriever
class RagApplication():
    def __init__(self, config):
        pass

    def init_vector_store(self):
        pass

    def load_vector_store(self):
        pass

    def add_document(self, file_path):
        pass

    def chat(self, question: str = '', topk: int = 5):
        pass
See rag.py for the full module.
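As a rough illustration only (not the actual rag.py implementation), the chat() method could combine the configured components along these lines; the retriever, reranker, and llm attribute names and the exact method signatures are assumptions:
# Illustrative sketch, not the real rag.py: attribute names (self.retriever,
# self.reranker, self.llm) and the return format are assumptions.
def chat(self, question: str = '', topk: int = 5):
    docs = self.retriever.retrieve(question, top_k=topk)
    docs = self.reranker.rerank(query=question,
                                documents=[d['text'] for d in docs])
    content = '\n'.join(d['text'] for d in docs)
    return self.llm.chat(prompt=question, content=content)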
Local model paths can be configured:
# Change these to your own configuration!!!
app_config = ApplicationConfig()
app_config.docs_path = "./docs/"
app_config.llm_model_path = "/data/users/searchgpt/pretrained_models/chatglm3-6b/"
retriever_config = DenseRetrieverConfig(
    model_name_or_path="/data/users/searchgpt/pretrained_models/bge-large-zh-v1.5",
    dim=1024,
    index_dir='/data/users/searchgpt/yq/GoMate/examples/retrievers/dense_cache'
)
rerank_config = BgeRerankerConfig(
    model_name_or_path="/data/users/searchgpt/pretrained_models/bge-reranker-large"
)
app_config.retriever_config = retriever_config
app_config.rerank_config = rerank_config
application = RagApplication(app_config)
application.init_vector_store()
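Once the vector store is initialized, the application can be queried directly. A hypothetical usage sketch (the file name is a placeholder and the exact return value of chat() depends on the rag.py implementation):
# Hypothetical usage: add one more document and ask a question.
application.add_document("./docs/sample.txt")
result = application.chat(question="What is GoMate?", topk=5)
print(result)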
python app.py
App backend logs:
This project was developed by the GoMate team at the Key Laboratory of Network Data Science and Technology, under the guidance of researchers Jiafeng Guo and Yixing Fan.
Suggestions, bad cases, and PRs are all welcome; feel free to join the group chat for timely discussion.
If the group is full, or for cooperation and exchange, please contact: