
Commit

Implemented features
- Dot now remembers the last loaded directory
- Dot is no longer installed with a packaged version of Mistral; instead, it installs the model upon first opening the app
- Improved PDF and file displaying
alexpinel committed Apr 13, 2024
1 parent a9efff7 commit 045a66a
Showing 29 changed files with 6,368 additions and 29,028 deletions.
1 change: 1 addition & 0 deletions .gitignore
Original file line number Diff line number Diff line change
@@ -8,3 +8,4 @@ src/.env
dist
.DS_Store
mpnet/
baai/
124 changes: 61 additions & 63 deletions README.md
@@ -1,63 +1,61 @@
# HELLO!

![ezgif-4-b96c0b5548](https://github.com/alexpinel/Dot/assets/93524949/e5983c61-d59c-45ac-86f6-9d62cffaf37b)

This is Dot, a standalone, open-source app for easy use of local LLMs, and RAG in particular, to interact with documents and files, similar to Nvidia's Chat with RTX. Dot is completely standalone and packaged with all dependencies, including a copy of Mistral 7B; this ensures the app is as accessible as possible, and no prior knowledge of programming or local LLMs is required to use it. You can install the app (available for Apple Silicon and Windows) here: [Dot website](https://dotapp.uk/)

### What does it do?

Dot can load multiple documents into an LLM and interact with them in a fully local environment through Retrieval-Augmented Generation (RAG). Supported document formats are PDF, DOCX, PPTX, XLSX, and Markdown. Apart from RAG, users can also switch to Big Dot for any interactions unrelated to their documents, similar to ChatGPT.


https://github.com/alexpinel/Dot/assets/93524949/807fb58c-40e0-407e-afb3-a3813477ce9e



### How does it work?

Dot is built with Electron JS, but its main functionality comes from a bundled install of Python that contains all necessary libraries and files. A multitude of libraries make everything work, but the most important to be aware of are llama.cpp to run the LLM, FAISS to create local vector stores, and LangChain & Hugging Face to set up the conversation chains and the embedding process.
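The flow those libraries implement can be sketched in miniature. In the toy below, a bag-of-words counter stands in for the sentence-transformers embedder, a brute-force cosine search stands in for FAISS, and a prompt template stands in for the LangChain chain handed to llama.cpp — it is illustrative only, not Dot's actual code.

```python
import math
from collections import Counter

def embed(text):
    """Toy embedding: bag-of-words counts (stands in for sentence-transformers)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a if k in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=1):
    """Stands in for a FAISS similarity search over document chunks."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

def build_prompt(query, chunks):
    """Stands in for the LangChain chain that stuffs retrieved context into the LLM prompt."""
    context = "\n".join(retrieve(query, chunks))
    return f"Answer using this context:\n{context}\n\nQuestion: {query}"

chunks = ["FAISS stores vectors locally.", "Electron renders the UI."]
print(build_prompt("How are vectors stored?", chunks))
```

The real app replaces each toy piece with the corresponding library, but the shape — embed, retrieve, stuff into a prompt, generate — is the same.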

### Install

You can either install the packaged app from the [Dot website](https://dotapp.uk/) or set up the project for development. To do so, follow these steps:

- Clone the repository: `git clone https://github.com/alexpinel/Dot.git`
- Install Node.js and then run `npm install` inside the project repository; run `npm install --force` if you face any issues at this stage

Now it is time to add a full Python bundle to the app. The purpose of this is to create a distributable environment with all necessary libraries. If you only plan on using Dot from the console, you might not need this particular step, but then make sure to replace the Python path locations specified in `src/index.js`. Creating the Python bundle is covered in detail here: [https://til.simonwillison.net/electron/python-inside-electron](https://til.simonwillison.net/electron/python-inside-electron). The bundles themselves can be downloaded from here: [https://github.com/indygreg/python-build-standalone/releases/tag/20240224](https://github.com/indygreg/python-build-standalone/releases/tag/20240224)

Having created the bundle, rename it to `python` and place it inside the `llm` directory. It is now time to install all necessary libraries. Keep in mind that a plain `pip install` will target your system Python, not the bundle, so invoke the bundle's own interpreter instead: `path/to/python/bin/python -m pip install` (use the bundled `.exe` on Windows).

Required python libraries:
- pytorch [link](https://pytorch.org/get-started/locally/) (CPU version recommended, as it is lighter than the GPU version)
- langchain [link](https://python.langchain.com/docs/get_started/quickstart)
- FAISS [link](https://python.langchain.com/docs/integrations/vectorstores/faiss)
- HuggingFace [link](https://python.langchain.com/docs/integrations/platforms/huggingface)
- llama.cpp [link](https://github.com/abetlen/llama-cpp-python) (use the CUDA implementation if you have an Nvidia GPU!)
- pypdf [link](https://python.langchain.com/docs/modules/data_connection/document_loaders/pdf)
- docx2txt [link](https://python.langchain.com/docs/integrations/document_loaders/microsoft_word)
- Unstructured [link](https://github.com/Unstructured-IO/unstructured) (use `pip install "unstructured[pptx,md,xlsx]"` to cover the supported file formats)
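Since every install must go through the bundle's own interpreter, a small helper can cut down on typos. A sketch — the interpreter path below is hypothetical, so point it at wherever your bundle actually lives:

```python
import subprocess

def pip_install_cmd(python_bin, *packages):
    """Build a pip command that targets a specific interpreter (the bundle's)."""
    return [python_bin, "-m", "pip", "install", *packages]

# Hypothetical bundle path; adjust to your layout (or the .exe on Windows).
cmd = pip_install_cmd("llm/python/bin/python3", "pypdf", "docx2txt", "unstructured[pptx,md,xlsx]")
print(" ".join(cmd))
# subprocess.run(cmd, check=True)  # uncomment to actually run the install
```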

Now Python should be set up and running! However, there are still a few steps left — time to add the final magic to Dot. First, create a folder inside the `llm` directory and name it `mpnet`; it will hold the sentence-transformers model used for the document embeddings. Fetch all the files from the following link and place them inside the new folder: [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2/tree/main)

Finally, download the Mistral 7B LLM from the following link and place it inside the `llm/scripts` directory, alongside the Python scripts used by Dot: [TheBloke/Mistral-7B-Instruct-v0.2-GGUF](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GGUF/blob/main/mistral-7b-instruct-v0.2.Q4_K_M.gguf)

That's it! If you follow these steps you should be able to get everything running. Please let me know if you face any issues :)
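Before launching, you can sanity-check that everything landed where the steps above expect it. A hedged sketch — the layout below is inferred from this guide, so adjust the names if your setup differs:

```python
import os

def missing_pieces(llm_dir):
    """Return whichever expected files/folders from the setup steps are absent."""
    expected = [
        "python",  # the renamed interpreter bundle
        "mpnet",   # the sentence-transformers model folder
        os.path.join("scripts", "mistral-7b-instruct-v0.2.Q4_K_M.gguf"),  # the LLM
    ]
    return [p for p in expected if not os.path.exists(os.path.join(llm_dir, p))]

problems = missing_pieces("llm")
print("All set!" if not problems else f"Missing: {problems}")
```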

### Future features I'd like to add:

- Linux support
- Ability to choose LLM
- Image support would be cool
- Increased awareness of documents beyond merely their content
- Loading individual files instead of selecting a folder (this is really needed; some users get confused by it, and I cannot blame them at all)
- Increased security considerations; after all, this is the whole point of using a local LLM
- Support for more document formats
- Storing file databases, allowing users to quickly switch between groups of files without having to load them all again
- idk, will find out along the way

# Want to help?

Please do! I am a busy student working on this as a side project so help is more than welcome!


19 changes: 15 additions & 4 deletions llm/scripts/bigdot.py
@@ -10,15 +10,26 @@


n_gpu_layers = 1 # Metal set to 1 is enough.
n_batch = 512 # Should be between 1 and n_ctx, consider the amount of RAM of your Apple Silicon Chip.
n_batch = 256 # Should be between 1 and n_ctx, consider the amount of RAM of your Apple Silicon Chip.


# Find the current script's directory
script_dir = os.path.dirname(__file__)
# Specify the desktop path
documents_path = os.path.join(os.path.expanduser("~"), "Documents")

# Specify the folder name
folder_name = "Dot-data"

# Combine the desktop path and folder name
folder_path = os.path.join(documents_path, folder_name)

# Create the folder if it doesn't exist
if not os.path.exists(folder_path):
print('LLM NOT FOUND!')
os.makedirs(folder_path)

# Construct the relative path
relative_model_path = "mistral-7b-instruct-v0.2.Q4_K_M.gguf"
model_path = os.path.join(script_dir, relative_model_path)
model_path = os.path.join(folder_path, relative_model_path)


llm = LlamaCpp(
28 changes: 18 additions & 10 deletions llm/scripts/docdot.py
@@ -14,39 +14,40 @@


# Specify the desktop path
desktop_path = os.path.join(os.path.expanduser("~"), "Documents")
documents_path = os.path.join(os.path.expanduser("~"), "Documents")

# Specify the folder name
folder_name = "Dot-data"
folder_name = "Dot-Data"

# Combine the desktop path and folder name
folder_path = os.path.join(desktop_path, folder_name)
folder_path = os.path.join(documents_path, folder_name)

# Create the folder if it doesn't exist
if not os.path.exists(folder_path):
print('LLM NOT FOUND!')
os.makedirs(folder_path)




current_directory = os.path.dirname(os.path.realpath(__file__))
model_directory = os.path.join(current_directory, '..', 'mpnet')
model_directory = os.path.join(current_directory, '..', 'baai')

#print("Model Directory:", os.path.abspath(model_directory))

### LOAD EMBEDDING SETTINGS
embeddings=HuggingFaceEmbeddings(model_name=model_directory, model_kwargs={'device':'mps'}) # SET TO 'cpu' for PC
vector_store = FAISS.load_local(os.path.join(folder_path, "Dot-data"), embeddings)
n_gpu_layers = 1 # Metal set to 1 is enough.
n_batch = 512 # Should be between 1 and n_ctx, consider the amount of RAM of your Apple Silicon Chip.
n_batch = 256 # Should be between 1 and n_ctx, consider the amount of RAM of your Apple Silicon Chip.


# Find the current script's directory
script_dir = os.path.dirname(__file__)

# Construct the relative path
relative_model_path = "mistral-7b-instruct-v0.2.Q4_K_M.gguf"
model_path = os.path.join(script_dir, relative_model_path)
model_path = os.path.join(folder_path, relative_model_path)


llm = LlamaCpp(
@@ -100,15 +101,16 @@ def generate_prompt(prompt: str, system_prompt: str = DEFAULT_SYSTEM_PROMPT) ->
def format_response(dictionary):
"""
Formats the response dictionary to:
- Print metadata.
- Print metadata for each document.
- Embed an iframe for PDF documents, attempting to open it at a specified page.
- Display page_content text for Word, Excel, or PowerPoint documents.
- Display the overall result after the document details.
Assumes each document in source_documents is an instance of a Document class.
"""
formatted_result = dictionary["result"]
# Correctly define source_documents from the dictionary
source_documents = dictionary["source_documents"]

sources = "\n\n---\n\n### Source Documents:\n"
sources = "### Source Documents:\n"
for doc in source_documents:
# Safely get the 'source' and 'page' from metadata, default if not found
source_path = doc.metadata.get("source", "Source path not available.")
@@ -131,7 +133,13 @@ def format_response(dictionary):
page_content_text = doc.page_content.replace('\n', ' ') if doc.page_content else "Page content not available."
sources += f"\n\n{metadata_info}\n{page_content_text}\n\n"

return formatted_result + sources
# Now appending the formatted result at the end
formatted_result = dictionary["result"]
complete_response = sources + "\n\n---\n\n### Result:\n" + formatted_result

return complete_response





4 changes: 2 additions & 2 deletions llm/scripts/embeddings.py
@@ -12,7 +12,7 @@
def embeddings(chosen_directory):

current_directory = os.path.dirname(os.path.realpath(__file__))
model_directory = os.path.join(current_directory, '..', 'mpnet')
model_directory = os.path.join(current_directory, '..', 'baai')

print("Model Directory:", os.path.abspath(model_directory))

@@ -31,7 +31,7 @@ def embeddings(chosen_directory):
desktop_path = os.path.join(os.path.expanduser("~"), "Documents")

# Specify the folder name
folder_name = "Dot-data"
folder_name = "Dot-Data"

# Combine the desktop path and folder name
folder_path = os.path.join(desktop_path, folder_name)
