Ollama feature branch #48

Merged: 4 commits, May 10, 2024
123 changes: 71 additions & 52 deletions README.md
@@ -47,58 +47,65 @@ The author(s) are looking to add core maintainers for this opensource project. R

# Contents

- [Getting started](#getting-started)
- [Installation](#installation)
- [Usage](#usage)
- [Temperature](#temperature)
- [Search results count](#search-results-count)
- [Customize the prompt](#customize-the-prompt)
- [Dry run](#get-context)
- [Loaders supported](#loaders-supported)
- [PDF](#pdf-file)
- [Youtube](#youtube-video)
- [Youtube channels](#youtube-channel)
- [Youtube search](#youtube-search)
- [Web page](#web-page)
- [Confluence](#confluence)
- [Sitemap](#sitemap)
- [Text](#text)
- [Custom loader](#add-a-custom-loader)
- [How to request more loaders](#more-loaders-coming-soon)
- [LLMs](#llms)
- [OpenAI](#openai)
- [Azure OpenAI](#azure-openai)
- [Mistral](#mistral)
- [Hugging Face](#hugging-face)
- [Anthropic](#anthropic)
- [Vertex AI](#vertex-ai)
- [Bring your own LLMs](#use-custom-llm-model)
- [Request support for new LLMs](#more-llms-coming-soon)
- [Embedding Models](#embedding-models)
- [OpenAI v3 Small](#openai-v3-small)
- [OpenAI v3 Large](#openai-v3-large)
- [ADA](#ada)
- [Cohere](#cohere)
- [Gecko Embedding](#gecko-embedding)
- [Custom embedding models](#use-custom-embedding-model)
- [Request support for embedding models](#more-embedding-models-coming-soon)
- [Vector databases supported](#vector-databases-supported)
- [Pinecone](#pinecone)
- [LanceDB](#lancedb)
- [Chroma](#chroma)
- [HNSWLib](#hnswlib)
- [Weaviate](#weaviate)
- [Qdrant](#qdrant)
- [Own Database](#bring-your-own-database)
- [How to request new vector databases](#more-databases-coming-soon)
- [Caches](#caches)
- [Redis](#redis)
- [LMDB File](#lmdb)
- [In memory cache](#inmemory)
- [Custom cache implementation](#bring-your-own-cache)
- [How to request new cache providers](#more-caches-coming-soon)
- [Sample projects](#sample-projects)
- [Contributors](#contributors)
- [EmbedJs](#embedjs)
- [Features](#features)
- [Quick note](#quick-note)
- [Contents](#contents)
- [Getting started](#getting-started)
- [Installation](#installation)
- [Usage](#usage)
- [Temperature](#temperature)
- [Search results count](#search-results-count)
- [Customize the prompt](#customize-the-prompt)
- [Get context (dry run)](#get-context-dry-run)
- [Get count of embedded chunks](#get-count-of-embedded-chunks)
- [Loaders supported](#loaders-supported)
- [Youtube video](#youtube-video)
- [Youtube channel](#youtube-channel)
- [Youtube search](#youtube-search)
- [PDF file](#pdf-file)
- [Web page](#web-page)
- [Confluence](#confluence)
- [Sitemap](#sitemap)
- [Text](#text)
- [Add a custom loader](#add-a-custom-loader)
- [More loaders coming soon](#more-loaders-coming-soon)
- [LLMs](#llms)
- [OpenAI](#openai)
- [Azure OpenAI](#azure-openai)
- [Mistral](#mistral)
- [Hugging Face](#hugging-face)
- [Anthropic](#anthropic)
- [Vertex AI](#vertex-ai)
- [Ollama](#ollama)
- [Use custom LLM model](#use-custom-llm-model)
- [More LLMs coming soon](#more-llms-coming-soon)
- [Embedding models](#embedding-models)
- [OpenAI v3 Small](#openai-v3-small)
- [OpenAI v3 Large](#openai-v3-large)
- [Ada](#ada)
- [Cohere](#cohere)
- [Gecko Embedding](#gecko-embedding)
- [Use custom embedding model](#use-custom-embedding-model)
- [More embedding models coming soon](#more-embedding-models-coming-soon)
- [Vector databases supported](#vector-databases-supported)
- [Pinecone](#pinecone)
- [LanceDB](#lancedb)
- [Chroma](#chroma)
- [HNSWLib](#hnswlib)
- [Weaviate](#weaviate)
- [Qdrant](#qdrant)
- [Bring your own database](#bring-your-own-database)
- [More databases coming soon](#more-databases-coming-soon)
- [Caches](#caches)
- [LMDB](#lmdb)
- [InMemory](#inmemory)
- [Redis](#redis)
- [Bring your own cache](#bring-your-own-cache)
- [More caches coming soon](#more-caches-coming-soon)
- [Langsmith Integration](#langsmith-integration)
- [Sample projects](#sample-projects)
- [Contributors](#contributors)

# Getting started

@@ -484,6 +491,18 @@ const ragApplication = await new RAGApplicationBuilder()
See also `/examples/vertexai` for [further documentation](/examples/vertexai/README.md) about authentication options and how to use it.


## Ollama

Locally running Ollama models are now supported. Installation instructions can be found at [https://ollama.com/](https://ollama.com/). Run `ollama run <modelname>` once to pull the model, then pass that model name to the `Ollama` constructor as `modelName`. Ollama listens on port `11434` by default; if you run it on a different port, also pass a `baseUrl` that includes that port in the constructor options:

```TS
const ragApplication = await new RAGApplicationBuilder()
    .setModel(new Ollama({
        modelName: 'llama3',
        baseUrl: 'http://localhost:11434'
    }))
```
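Once the model is set, the application is used like any other in this README. A minimal usage sketch follows; it assumes the `build()` and `query()` calls shown in the Getting started section, and the question is illustrative only:

```TS
// Usage sketch -- assumes the builder's build() and query() APIs shown
// elsewhere in this README; vector database setup is omitted.
const ragApplication = await new RAGApplicationBuilder()
    .setModel(new Ollama({ modelName: 'llama3', baseUrl: 'http://localhost:11434' }))
    .build();

console.log(await ragApplication.query('What is Tesla?'));
```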

## Use custom LLM model

You can use a custom LLM model by implementing the `BaseModel` interface. Here's what that would look like:
27 changes: 27 additions & 0 deletions examples/ollama/README.md
@@ -0,0 +1,27 @@
## Requirements

This example is a Node.js application that uses the `embedJs` RAG library to store text from various sources as vector embeddings in a database, retrieve it with similarity search, and interpret the results with an Ollama LLM.

The main motivation is to run the whole RAG application locally with open-source components.

### Install Node.js dependencies

```bash
npm install
```

### Tesla example

You must have Ollama installed ([https://ollama.com/](https://ollama.com/)) and have run the model at least once:

```bash
ollama run llama3
```

Run the "Tesla text" retrieval simple example with default parameters:

```bash
npm start -- llama3
```

This first retrieves the content from the internet and indexes it into the in-memory vector database, then outputs similarity search results interpreted by the local Ollama llama3 LLM.
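
The entry point follows roughly this flow. The sketch below is a simplified illustration, not the example file verbatim; the import path and the `WebLoader` option name are assumptions, so consult the example source for the real code:

```TS
import { RAGApplicationBuilder, Ollama, WebLoader } from '@llm-tools/embedjs';

// Simplified sketch of the example flow: build a RAG app backed by the local
// Ollama model named on the command line, index a web page, then answer a
// query. Names outside this diff are assumptions, not confirmed API.
const modelName = process.argv[2] ?? 'llama3';

const ragApplication = await new RAGApplicationBuilder()
    .setModel(new Ollama({ modelName, baseUrl: 'http://localhost:11434' }))
    .build();

await ragApplication.addLoader(new WebLoader({ urlOrContent: 'https://en.wikipedia.org/wiki/Tesla,_Inc.' }));
console.log(await ragApplication.query('Tell me about Tesla.'));
```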
195 changes: 195 additions & 0 deletions examples/ollama/package-lock.json


17 changes: 17 additions & 0 deletions examples/ollama/package.json
@@ -0,0 +1,17 @@
{
"name": "ollama",
"version": "1.0.0",
"type": "module",
"private": true,
"scripts": {
"start": "tsc && node dist/examples/ollama/src/index.js"
},
"author": "",
"license": "ISC",
"dependencies": {
"ts-node": "^10.9.2"
},
"devDependencies": {
"@types/node": "^20.11.24"
}
}