docs: self-query consistency (langchain-ai#10502)
The [`self-querying`
navbar](https://python.langchain.com/docs/modules/data_connection/retrievers/self_query/)
has `self-querying` repeated in each menu item. I've simplified it to be more readable:
- removed `self-querying` from the title of each page;
- added a description to the vector stores;
- added a description and a link to the Integration Card
(`integrations/providers`) of the vector stores where they were missing.
leo-gan authored Sep 13, 2023
1 parent 415d38a commit f4e6eac
Showing 18 changed files with 332 additions and 210 deletions.
19 changes: 12 additions & 7 deletions docs/extras/integrations/providers/milvus.mdx
@@ -1,20 +1,25 @@
 # Milvus
 
-This page covers how to use the Milvus ecosystem within LangChain.
-It is broken into two parts: installation and setup, and then references to specific Milvus wrappers.
+>[Milvus](https://milvus.io/docs/overview.md) is a database that stores, indexes, and manages
+> massive embedding vectors generated by deep neural networks and other machine learning (ML) models.
 
 ## Installation and Setup
-- Install the Python SDK with `pip install pymilvus`
-## Wrappers
 
-### VectorStore
+Install the Python SDK:
+
+```bash
+pip install pymilvus
+```
+
+## Vector Store
 
-There exists a wrapper around Milvus indexes, allowing you to use it as a vectorstore,
+There exists a wrapper around `Milvus` indexes, allowing you to use it as a vectorstore,
 whether for semantic search or example selection.
 
 To import this vectorstore:
 ```python
 from langchain.vectorstores import Milvus
 ```
 
-For a more detailed walkthrough of the Milvus wrapper, see [this notebook](/docs/integrations/vectorstores/milvus.html)
+For a more detailed walkthrough of the `Milvus` wrapper, see [this notebook](/docs/integrations/vectorstores/milvus.html)
8 changes: 5 additions & 3 deletions docs/extras/integrations/providers/pinecone.mdx
@@ -1,16 +1,18 @@
 # Pinecone
 
-This page covers how to use the Pinecone ecosystem within LangChain.
-It is broken into two parts: installation and setup, and then references to specific Pinecone wrappers.
+>[Pinecone](https://docs.pinecone.io/docs/overview) is a vector database with broad functionality.
 
 ## Installation and Setup
 
 Install the Python SDK:
 
 ```bash
 pip install pinecone-client
 ```
 
-## Vectorstore
+## Vector store
 
 There exists a wrapper around Pinecone indexes, allowing you to use it as a vectorstore,
 whether for semantic search or example selection.
19 changes: 13 additions & 6 deletions docs/extras/integrations/providers/qdrant.mdx
@@ -1,15 +1,22 @@
 # Qdrant
 
-This page covers how to use the Qdrant ecosystem within LangChain.
-It is broken into two parts: installation and setup, and then references to specific Qdrant wrappers.
+>[Qdrant](https://qdrant.tech/documentation/) (read: quadrant) is a vector similarity search engine.
+> It provides a production-ready service with a convenient API to store, search, and manage
+> points - vectors with an additional payload. `Qdrant` is tailored to extended filtering support.
 
 ## Installation and Setup
-- Install the Python SDK with `pip install qdrant-client`
-## Wrappers
 
-### VectorStore
+Install the Python SDK:
+
+```bash
+pip install qdrant-client
+```
+
+## Vector Store
 
-There exists a wrapper around Qdrant indexes, allowing you to use it as a vectorstore,
+There exists a wrapper around `Qdrant` indexes, allowing you to use it as a vectorstore,
 whether for semantic search or example selection.
 
 To import this vectorstore:
16 changes: 12 additions & 4 deletions docs/extras/integrations/providers/redis.mdx
@@ -1,26 +1,34 @@
 # Redis
 
-This page covers how to use the [Redis](https://redis.com) ecosystem within LangChain.
-It is broken into two parts: installation and setup, and then references to specific Redis wrappers.
+>[Redis](https://redis.com) is an open-source key-value store that can be used as a cache,
+> message broker, database, vector database and more.
 
 ## Installation and Setup
-- Install the Redis Python SDK with `pip install redis`
+
+Install the Python SDK:
+
+```bash
+pip install redis
+```
 
 ## Wrappers
 
-All wrappers needing a redis url connection string to connect to the database support either a stand alone Redis server
+All wrappers need a Redis connection URL string to connect to the database, and they support either a standalone Redis server
 or a High-Availability setup with Replication and Redis Sentinels.
 
 ### Redis Standalone connection url
-For standalone Redis server the official redis connection url formats can be used as describe in the python redis modules
+For a standalone `Redis` server, the official Redis connection URL formats can be used, as described in the Python Redis module's
 "from_url()" method [Redis.from_url](https://redis-py.readthedocs.io/en/stable/connections.html#redis.Redis.from_url)
 
 Example: `redis_url = "redis://:secret-pass@localhost:6379/0"`
 
 ### Redis Sentinel connection url
 
 For [Redis sentinel setups](https://redis.io/docs/management/sentinel/) the connection scheme is "redis+sentinel".
-This is an un-offical extensions to the official IANA registered protocol schemes as long as there is no connection url
-for Sentinels available.
+This is an unofficial extension to the official IANA-registered protocol schemes, used as long as there is no official
+connection URL format for Sentinels available.
 
 Example: `redis_url = "redis+sentinel://:secret-pass@sentinel-host:26379/mymaster/0"`
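The two URL schemes documented above can be composed programmatically. A minimal, stdlib-only sketch (the helper names, hosts, and password are illustrative placeholders, not part of the LangChain or redis-py API):

```python
from urllib.parse import quote


def standalone_redis_url(host: str, port: int, db: int, password: str = "") -> str:
    """Build a standalone connection URL: redis://[:password@]host:port/db."""
    auth = f":{quote(password)}@" if password else ""
    return f"redis://{auth}{host}:{port}/{db}"


def sentinel_redis_url(host: str, port: int, service: str, db: int, password: str = "") -> str:
    """Build a Sentinel connection URL: redis+sentinel://[:password@]host:port/service/db."""
    auth = f":{quote(password)}@" if password else ""
    return f"redis+sentinel://{auth}{host}:{port}/{service}/{db}"


# Reproduces the two example URLs from the docs above.
print(standalone_redis_url("localhost", 6379, 0, password="secret-pass"))
# redis://:secret-pass@localhost:6379/0
print(sentinel_redis_url("sentinel-host", 26379, "mymaster", 0, password="secret-pass"))
# redis+sentinel://:secret-pass@sentinel-host:26379/mymaster/0
```

`quote` percent-encodes any special characters in the password so the resulting string stays a valid URL.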
12 changes: 6 additions & 6 deletions docs/extras/integrations/providers/vectara/index.mdx
@@ -1,27 +1,27 @@
 # Vectara
 
-What is Vectara?
+>[Vectara](https://docs.vectara.com/docs/) is a GenAI platform for developers. It provides a simple API to build Grounded Generation
+>(aka Retrieval-augmented-generation or RAG) applications.
 
 **Vectara Overview:**
-- Vectara is developer-first API platform for building GenAI applications
+- `Vectara` is a developer-first API platform for building GenAI applications
 - To use Vectara - first [sign up](https://console.vectara.com/signup) and create an account. Then create a corpus and an API key for indexing and searching.
 - You can use Vectara's [indexing API](https://docs.vectara.com/docs/indexing-apis/indexing) to add documents into Vectara's index
 - You can use Vectara's [Search API](https://docs.vectara.com/docs/search-apis/search) to query Vectara's index (which also supports Hybrid search implicitly).
 - You can use Vectara's integration with LangChain as a Vector store or using the Retriever abstraction.
 
 ## Installation and Setup
 
-To use Vectara with LangChain no special installation steps are required.
+To use `Vectara` with LangChain, no special installation steps are required.
 To get started, follow our [quickstart](https://docs.vectara.com/docs/quickstart) guide to create an account, a corpus and an API key.
 Once you have these, you can provide them as arguments to the Vectara vectorstore, or you can set them as environment variables.
 
 - export `VECTARA_CUSTOMER_ID`="your_customer_id"
 - export `VECTARA_CORPUS_ID`="your_corpus_id"
 - export `VECTARA_API_KEY`="your-vectara-api-key"
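The same three variables can also be set from Python before the vectorstore is constructed. A minimal sketch (the values are placeholders to be replaced with the credentials from your Vectara console):

```python
import os

# Placeholder credentials -- substitute the real values from your account.
os.environ["VECTARA_CUSTOMER_ID"] = "your_customer_id"
os.environ["VECTARA_CORPUS_ID"] = "your_corpus_id"
os.environ["VECTARA_API_KEY"] = "your-vectara-api-key"

# Code that constructs the Vectara vectorstore without explicit arguments
# can then pick the credentials up from the environment.
print(os.environ["VECTARA_CUSTOMER_ID"])
```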

-## Usage
-
-### VectorStore
+## Vector Store
 
 There exists a wrapper around the Vectara platform, allowing you to use it as a vectorstore, whether for semantic search or example selection.
 
21 changes: 13 additions & 8 deletions docs/extras/integrations/providers/weaviate.mdx
@@ -1,10 +1,10 @@
 # Weaviate
 
-This page covers how to use the Weaviate ecosystem within LangChain.
+>[Weaviate](https://weaviate.io/) is an open-source vector database. It allows you to store data objects and vector embeddings from
+>your favorite ML models, and scale seamlessly into billions of data objects.
 
-What is Weaviate?
+What is `Weaviate`?
 
 **Weaviate in a nutshell:**
 - Weaviate is an open-source database of the type vector search engine.
 - Weaviate allows you to store JSON documents in a class property-like fashion while attaching machine learning vectors to these documents to represent them in vector space.
 - Weaviate can be used stand-alone (aka bring your vectors) or with a variety of modules that can do the vectorization for you and extend the core capabilities.
@@ -14,15 +14,20 @@ What is Weaviate?

 **Weaviate in detail:**
 
-Weaviate is a low-latency vector search engine with out-of-the-box support for different media types (text, images, etc.). It offers Semantic Search, Question-Answer Extraction, Classification, Customizable Models (PyTorch/TensorFlow/Keras), etc. Built from scratch in Go, Weaviate stores both objects and vectors, allowing for combining vector search with structured filtering and the fault tolerance of a cloud-native database. It is all accessible through GraphQL, REST, and various client-side programming languages.
+`Weaviate` is a low-latency vector search engine with out-of-the-box support for different media types (text, images, etc.). It offers Semantic Search, Question-Answer Extraction, Classification, Customizable Models (PyTorch/TensorFlow/Keras), etc. Built from scratch in Go, Weaviate stores both objects and vectors, allowing for combining vector search with structured filtering and the fault tolerance of a cloud-native database. It is all accessible through GraphQL, REST, and various client-side programming languages.
 
 ## Installation and Setup
-- Install the Python SDK with `pip install weaviate-client`
-## Wrappers
 
-### VectorStore
+Install the Python SDK:
+
+```bash
+pip install weaviate-client
+```
+
+## Vector Store
 
-There exists a wrapper around Weaviate indexes, allowing you to use it as a vectorstore,
+There exists a wrapper around `Weaviate` indexes, allowing you to use it as a vectorstore,
 whether for semantic search or example selection.
 
 To import this vectorstore:
@@ -6,11 +6,14 @@
 "id": "13afcae7",
 "metadata": {},
 "source": [
-"# Deep Lake self-querying \n",
+"# Deep Lake\n",
 "\n",
-">[Deep Lake](https://www.activeloop.ai) is a multimodal database for building AI applications.\n",
+">[Deep Lake](https://www.activeloop.ai) is a multimodal database for building AI applications\n",
+">[Deep Lake](https://github.com/activeloopai/deeplake) is a database for AI.\n",
+">Store Vectors, Images, Texts, Videos, etc. Use with LLMs/LangChain. Store, query, version,\n",
+"> & visualize any AI data. Stream data in real time to PyTorch/TensorFlow.\n",
 "\n",
-"In the notebook we'll demo the `SelfQueryRetriever` wrapped around a Deep Lake vector store. "
+"In the notebook, we'll demo the `SelfQueryRetriever` wrapped around a `Deep Lake` vector store. "
 ]
 },
 {
@@ -5,11 +5,11 @@
 "id": "13afcae7",
 "metadata": {},
 "source": [
-"# Chroma self-querying \n",
+"# Chroma\n",
 "\n",
 ">[Chroma](https://docs.trychroma.com/getting-started) is a database for building AI applications with embeddings.\n",
 "\n",
-"In the notebook we'll demo the `SelfQueryRetriever` wrapped around a Chroma vector store. "
+"In the notebook, we'll demo the `SelfQueryRetriever` wrapped around a `Chroma` vector store. "
 ]
 },
 {
@@ -447,7 +447,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.10.6"
+"version": "3.10.12"
 }
 },
 "nbformat": 4,
