Added OllamaEmbeddings component with documentation (langflow-ai#1309)
Ollama embeddings enhance Langflow's support of Ollama, allowing users to run LLMs such as Mistral and Llama locally. LangChain documentation can be found via [this
link](https://python.langchain.com/docs/integrations/text_embedding/ollama).

Changes:

- New `OllamaEmbeddingsComponent` class
- Associated documentation in the `Embeddings` section
ogabrielluiz authored Jan 11, 2024
2 parents 78c32b9 + 37ced42 commit 13fdc62
Showing 3 changed files with 58 additions and 5 deletions.
20 changes: 15 additions & 5 deletions docs/docs/components/embeddings.mdx
@@ -1,11 +1,13 @@
import Admonition from "@theme/Admonition";

# Embeddings

<Admonition type="caution" icon="🚧" title="ZONE UNDER CONSTRUCTION">
<p>
We appreciate your understanding as we polish our documentation – it may
contain some rough edges. Share your feedback or report issues to help us
improve! 🛠️📝
</p>
</Admonition>

Embeddings are vector representations of text that capture the semantic meaning of the text. They are created using text embedding models and allow us to think about the text in a vector space, enabling us to perform tasks like semantic search, where we look for pieces of text that are most similar in the vector space.
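The semantic-search idea above reduces to comparing embedding vectors, most commonly by cosine similarity. A minimal sketch with hand-made 3-dimensional vectors (real embedding models produce hundreds or thousands of dimensions; the vectors and document names here are purely illustrative):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# Toy "embeddings" for a query and two documents.
query = [1.0, 0.5, 0.0]
docs = {"cat care tips": [0.9, 0.6, 0.1], "stock market news": [0.0, 0.2, 1.0]}

# Semantic search: return the document whose vector is most similar to the query.
best = max(docs, key=lambda name: cosine_similarity(query, docs[name]))
# best == "cat care tips"
```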
@@ -110,4 +112,12 @@ Vertex AI is a cloud computing platform offered by Google Cloud Platform (GCP).
- **top_k:** Changes how the model selects tokens for output; the next token is selected from the `k` most probable tokens – defaults to `40`.
- **top_p:** Tokens are selected from most probable to least until the sum of their probabilities reaches the `top_p` value – defaults to `0.95`.
- **tuned_model_name:** The name of a tuned model. If provided, model_name is ignored.
- **verbose:** This parameter is used to control the level of detail in the output of the chain. When set to True, it will print out some internal states of the chain while it is being run, which can help debug and understand the chain's behavior. If set to False, it will suppress the verbose output – defaults to `False`.
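The `top_k` and `top_p` parameters compose: the sampler first keeps the `top_k` most probable tokens, then trims that set to the smallest prefix whose cumulative probability reaches `top_p`. A toy sketch of that filtering step (illustrative only; not Vertex AI's actual sampler):

```python
def filter_candidates(probs: dict[str, float], top_k: int = 40, top_p: float = 0.95) -> list[str]:
    """Apply top-k then top-p (nucleus) filtering to a token probability table."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    kept, cumulative = [], 0.0
    for token, p in ranked:
        kept.append(token)
        cumulative += p
        if cumulative >= top_p:  # stop once the nucleus mass is covered
            break
    return kept

vocab = {"the": 0.5, "a": 0.3, "cat": 0.15, "dog": 0.05}
# With top_p=0.8 only "the" and "a" survive; the next token is sampled from them.
```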

### OllamaEmbeddings

Used to load [Ollama’s](https://ollama.ai/) embedding models. Wrapper around LangChain's [Ollama API](https://python.langchain.com/docs/integrations/text_embedding/ollama).

- **model:** The name of the Ollama model to use – defaults to `llama2`.
- **base_url:** The base URL for the Ollama API – defaults to `http://localhost:11434`.
- **temperature:** Tunes the degree of randomness in text generations. Should be a non-negative value – defaults to `0`.
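Under the hood these parameters map onto a plain HTTP call: Ollama serves embeddings from a local REST endpoint, and the wrapper posts the model name and prompt to it. A rough stdlib-only sketch, assuming Ollama's `/api/embeddings` request shape and a server at the default `base_url` (`build_embed_request` and `ollama_embed` are hypothetical helpers, not part of Langflow or LangChain):

```python
import json
import urllib.request

def build_embed_request(prompt: str, model: str = "llama2",
                        base_url: str = "http://localhost:11434") -> urllib.request.Request:
    """Build the POST request for Ollama's /api/embeddings endpoint."""
    body = json.dumps({"model": model, "prompt": prompt}).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/api/embeddings",
        data=body,
        headers={"Content-Type": "application/json"},
    )

def ollama_embed(prompt: str, **kwargs) -> list[float]:
    """Send the request and return the embedding vector (needs a running server)."""
    with urllib.request.urlopen(build_embed_request(prompt, **kwargs)) as resp:
        return json.load(resp)["embedding"]
```

With `ollama serve` running locally, `ollama_embed("hello world")` would return a list of floats; the LangChain wrapper performs essentially this call for each text it embeds.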
41 changes: 41 additions & 0 deletions src/backend/langflow/components/embeddings/OllamaEmbeddings.py
@@ -0,0 +1,41 @@
from typing import Optional

from langflow import CustomComponent
from langchain.embeddings.base import Embeddings
from langchain_community.embeddings import OllamaEmbeddings


class OllamaEmbeddingsComponent(CustomComponent):
    """
    A custom component for implementing an Embeddings Model using Ollama.
    """

    display_name: str = "Ollama Embeddings"
    description: str = "Embeddings model from Ollama."
    documentation = "https://python.langchain.com/docs/integrations/text_embedding/ollama"
    beta = True

    def build_config(self):
        return {
            "model": {
                "display_name": "Ollama Model",
            },
            "base_url": {"display_name": "Ollama Base URL"},
            "temperature": {"display_name": "Model Temperature"},
            "code": {"show": False},
        }

    def build(
        self,
        model: str = "llama2",
        base_url: str = "http://localhost:11434",
        temperature: Optional[float] = None,
    ) -> Embeddings:
        try:
            output = OllamaEmbeddings(
                model=model,
                base_url=base_url,
                temperature=temperature,
            )  # type: ignore
        except Exception as e:
            raise ValueError("Could not connect to Ollama API.") from e
        return output
2 changes: 2 additions & 0 deletions src/backend/langflow/config.yaml
@@ -106,6 +106,8 @@ embeddings:
documentation: "https://python.langchain.com/docs/modules/data_connection/text_embedding/integrations/google_vertex_ai_palm"
AmazonBedrockEmbeddings:
documentation: "https://python.langchain.com/docs/modules/data_connection/text_embedding/integrations/bedrock"
OllamaEmbeddings:
documentation: "https://python.langchain.com/docs/modules/data_connection/text_embedding/integrations/ollama"

llms:
OpenAI: