
Enhancement: Haystack integration #68

Open

MatthiasBergner opened this issue Aug 21, 2024 · 1 comment
Labels
enhancement New feature or request

Comments

@MatthiasBergner

What features would you like to see added?

Haystack is a great framework for RAG. Haystack provides an API, and LibreChat is a great UI with solid user and document management built in. Do you see an opportunity to access Haystack from LibreChat through that API?
https://haystack.deepset.ai

Haystack offers numerous features that can significantly enhance LibreChat, while LibreChat is the perfect complement for using Haystack with a user-friendly interface.

More details

Here is a small step-by-step guide to illustrate the idea:
If Ollama is not already installed and running, visit https://ollama.com
For Linux, install with:

curl -fsSL https://ollama.com/install.sh | sh

Pull the model used in the code (it's a small one, for testing purposes):

ollama pull mistral

After installing the necessary dependencies, the following code gives a small glimpse of what I mean:

conda create -n haystack python pip -y
conda activate haystack
pip install haystack-ai datasets ollama-haystack gradio

file: example.py

# Import required libraries
from haystack import Document, Pipeline
from haystack.document_stores.in_memory import InMemoryDocumentStore
from haystack.components.retrievers import InMemoryBM25Retriever
from haystack.components.builders import PromptBuilder
from haystack_integrations.components.generators.ollama import OllamaGenerator
import gradio as gr

# Write documents to InMemoryDocumentStore
document_store = InMemoryDocumentStore()
document_store.write_documents([
    Document(content="My name is Jean and I live in Paris."), 
    Document(content="My name is Mark and I live in Berlin."), 
    Document(content="My name is Giorgio and I live in Rome.")
])

# Initialize retriever
retriever = InMemoryBM25Retriever(document_store)

# Define prompt template
template = """
Given the following information, answer the question.

Context:
{% for document in documents %}
    {{ document.content }}
{% endfor %}

Question: {{question}}
Answer:
"""

# Initialize prompt builder
prompt_builder = PromptBuilder(template=template)

# Initialize Ollama generator
generator = OllamaGenerator(
    model="mistral",
    url="http://localhost:11434/api/generate",
    generation_kwargs={
        "num_predict": 100,
        "temperature": 0.9,
    }
)

# Create and configure pipeline
basic_rag_pipeline = Pipeline()
basic_rag_pipeline.add_component("retriever", retriever)
basic_rag_pipeline.add_component("prompt_builder", prompt_builder)
basic_rag_pipeline.add_component("llm", generator)
basic_rag_pipeline.connect("retriever", "prompt_builder.documents")
basic_rag_pipeline.connect("prompt_builder", "llm")

# Visualize pipeline (optional)
# basic_rag_pipeline.draw("basic-rag-pipeline.png")

# Define function to run pipeline with Gradio
def ask_question(question):
    response = basic_rag_pipeline.run(
        {
            "retriever": {"query": question}, 
            "prompt_builder": {"question": question}
        }
    )
    return response["llm"]["replies"][0]

# Create Gradio interface
gr_interface = gr.Interface(
    fn=ask_question,
    inputs=gr.components.Textbox(lines=2, placeholder="Enter your question here..."),
    outputs="text"
)

gr_interface.launch()

Run the script:

python example.py

Access the Gradio UI at http://127.0.0.1:7860
and ask a question about the given information, like:

My name is Jean, where do I live?

Instead of Gradio, it would be very nice to use LibreChat, as it offers so many more possibilities.
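Since LibreChat can talk to OpenAI-compatible APIs via custom endpoints, one possible integration path (just a sketch of the idea, not an official method; the helper name and the server framing described in the comments are my assumptions) would be to wrap the Haystack pipeline in a small HTTP server that returns responses in the OpenAI chat-completions shape:

```python
import json
import time
import uuid

def to_openai_chat_response(reply: str, model: str = "mistral") -> dict:
    """Wrap a plain-text pipeline reply in the OpenAI chat-completion
    response shape that an OpenAI-compatible client expects."""
    return {
        "id": f"chatcmpl-{uuid.uuid4().hex[:12]}",
        "object": "chat.completion",
        "created": int(time.time()),
        "model": model,
        "choices": [
            {
                "index": 0,
                "message": {"role": "assistant", "content": reply},
                "finish_reason": "stop",
            }
        ],
    }

# In a real server (e.g. FastAPI), a POST /v1/chat/completions handler
# would take the last user message, run basic_rag_pipeline.run(...) from
# the example above, and return:
#   to_openai_chat_response(response["llm"]["replies"][0])
print(json.dumps(to_openai_chat_response("Jean lives in Paris."), indent=2))
```

LibreChat could then be pointed at such a server as a custom endpoint instead of using the Gradio UI.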

Thanks for reading all of this, you are very nice! And I send some nice greetings to all of you in the community!
Cheers! Matthias :)

Which components are impacted by your request?

Endpoints

Pictures

(Screenshot attached: "Bildschirmfoto 2024-08-21 um 11 14 50")

Code of Conduct

  • I agree to follow this project's Code of Conduct
@MatthiasBergner MatthiasBergner added the enhancement New feature or request label Aug 21, 2024
@danny-avila danny-avila transferred this issue from danny-avila/LibreChat Aug 21, 2024
@danny-avila
Owner

Transferred to appropriate repo
