
Radient

Radient is a developer-friendly, lightweight library for vectorization, i.e. turning data into embeddings. Radient supports many data types, not just text.

$ pip install radient

Why Radient?

In applications that leverage RAG, vector databases are commonly used to retrieve content relevant to the query. Vector search has become so popular that "traditional" database vendors are rushing to support it. (Anybody see those funky SingleStore ads on US-101?)

Although still predominantly used for text today, vectors will be used extensively across a variety of different modalities in the upcoming months. This evolution is being powered by two independent occurrences: 1) the shift from large language models to large multimodal models (such as GPT-4o, Reka, and Fuyu), and 2) the rising adoption of vectors for "traditional" tasks such as recommendation and semantic search. In short, vectors are going mainstream, and we need a way to vectorize everything, not just text.
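Under the hood, retrieval over embeddings reduces to nearest-neighbor search on vector similarity. A toy numpy sketch with made-up three-dimensional vectors (real embeddings have hundreds of dimensions):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Made-up embeddings for three documents and one query.
docs = np.array([[0.9, 0.1, 0.0],
                 [0.0, 1.0, 0.1],
                 [0.7, 0.6, 0.2]])
query = np.array([1.0, 0.0, 0.1])

# Rank documents by similarity to the query; the best match comes first.
scores = [cosine_similarity(query, d) for d in docs]
best = int(np.argmax(scores))  # index of the most similar document
```

A vector database does essentially this, but over millions of vectors with approximate-nearest-neighbor indexes instead of a brute-force scan.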

Full write-up on Radient will come later, along with more sample applications, so stay tuned.

Getting started

Vectorization can be performed as follows:

>>> from radient import text_vectorizer
>>> vectorizer = text_vectorizer()
>>> vectorizer.vectorize("Hello, world!")
Vector([-3.21440510e-02, -5.10351397e-02,  3.69579718e-02, ...])

The above snippet vectorizes the string "Hello, world!" using a default model, namely bge-small-en-v1.5 from sentence-transformers. If your Python environment does not contain the sentence-transformers library, Radient will prompt you for it:

>>> vectorizer = text_vectorizer()
Vectorizer requires sentence-transformers. Install? [Y/n]

You can type "Y" to have Radient install it for you automatically.

Each vectorizer can take a method parameter along with optional keyword arguments which get passed directly to the underlying vectorization library. For example, we can pick a specific model from the sentence-transformers library using:

>>> vectorizer_mbai = text_vectorizer(method="sbert", model_name_or_path="mixedbread-ai/mxbai-embed-large-v1")
>>> vectorizer_mbai.vectorize("Hello, world!")
Vector([ 0.01729078,  0.04468533,  0.00055427, ...])

This will use Mixedbread AI's mxbai-embed-large-v1 model to perform vectorization.
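The pass-through works the way you'd expect: keyword arguments the factory doesn't consume are forwarded verbatim to the underlying library. A hypothetical sketch of that pattern (toy names, not Radient's actual internals):

```python
def make_vectorizer(method="sbert", **model_kwargs):
    """Toy factory: consume `method`, forward everything else."""
    def load_model(**kwargs):
        # Stand-in for e.g. sentence_transformers.SentenceTransformer(...)
        return {"method": method, **kwargs}
    return load_model(**model_kwargs)

vec = make_vectorizer(method="sbert",
                      model_name_or_path="mixedbread-ai/mxbai-embed-large-v1")
# `model_name_or_path` reaches the backend untouched.
```

This is why any option the underlying library accepts can be supplied directly at vectorizer construction time.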

More than just text

With Radient, you're not limited to text. Audio, graphs, images, and molecules can be vectorized as well:

>>> from pathlib import Path
>>> import networkx as nx
>>> from radient import audio_vectorizer, graph_vectorizer, image_vectorizer, molecule_vectorizer
>>> audio_vectorizer().vectorize(str(Path.home() / "audio.wav"))
Vector([-5.26519306e-03, -4.55586426e-03,  1.79212391e-02, ...])
>>> graph_vectorizer().vectorize(nx.karate_club_graph())
[Vector([ 2.16479279e-01, -2.39208999e-02, -4.14670670e-02, ...]),
 Vector([ 2.29488305e-01, -2.78161774e-02, -3.32570679e-02, ...]),
 ...
 Vector([ 0.04171451,  0.19261454, -0.05810466, ...])]
>>> image_vectorizer().vectorize(str(Path.home() / "image.jpg"))
Vector([0.00898108, 0.02274677, 0.00100744, ...])
>>> molecule_vectorizer().vectorize("O=C=O")  # O=C=O == SMILES string for CO2
Vector([False, False, False, ...])
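The boolean entries suggest a bit fingerprint, which is how molecule vectorizers typically represent structure. If you need a compact form for storage, numpy can pack booleans into bytes; this is an illustrative aside, not part of Radient's API:

```python
import numpy as np

# Eight fingerprint bits packed into a single byte (first bit = MSB).
bits = np.array([False, True, True, False, True, False, False, True])
packed = np.packbits(bits)
value = int(packed[0])  # 0b01101001 == 105
```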

A partial list of methods and optional kwargs supported by each modality can be found here.

Sources and sinks

You can attach metadata to the resulting embeddings and store them in sinks. Radient currently supports Milvus:

>>> vector = vectorizer.vectorize("My name is Slim Shady")
>>> vector.add_key_value("artist", "Eminem")  # {"artist": "Eminem"}
>>> vector.store(collection_name="_radient", field_name="vector")
{'insert_count': 1, 'ids': [449662764050547785]}

This will store the vector in a Milvus instance at http://localhost:19530 by default; if the specified collection does not exist at this URI, it will be created (with dynamic schema turned on for flexibility). You can target a different Milvus instance by specifying the milvus_uri parameter. This works with Zilliz Cloud instances too, e.g. vector.store(milvus_uri="https://in01-dd7f98cd6b900f6.aws-us-west-2.vectordb.zillizcloud.com:19530").

Radient in production

For production use cases with large quantities of data, performance is key. Radient provides an accelerate function to optimize some vectorizers on-the-fly:

>>> vectorizer.vectorize("Hello, world!")  # runtime: ~32ms
Vector([-3.21440510e-02, -5.10351397e-02,  3.69579718e-02, ...])
>>> vectorizer.accelerate()
>>> vectorizer.vectorize("Hello, world!")  # runtime: ~17ms
Vector([-3.21440622e-02, -5.10351285e-02,  3.69579904e-02, ...])
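To measure the speedup on your own hardware, a simple timing harness suffices; the lambda below is a stand-in for the actual vectorizer.vectorize call:

```python
import time

def best_time(fn, *args, repeats=20):
    """Best wall-clock time in seconds over several runs (reduces noise)."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - start)
    return best

# Swap the stand-in for vectorizer.vectorize to benchmark the real thing,
# once before and once after calling vectorizer.accelerate().
elapsed = best_time(lambda s: s.lower(), "Hello, world!")
```

Taking the best of several runs rather than the mean avoids counting one-off warm-up costs such as model loading.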

Supported libraries

Radient builds atop work from the broader ML community; most vectorizers wrap models and methods from other libraries.

A massive thank you to all the creators and maintainers of these libraries.

Coming soon™

Some features slated for the near term (hopefully):

  • Sparse, binary, and multi-vector support
  • Support for all relevant embedding models on Hugging Face, e.g. non-seq2seq models
  • Data sources from object storage, Google Drive, Box, etc.
  • Vector sinks to Zilliz, Databricks, Confluent, etc.
  • Creating flows to tie sources, operators, vectorizers, and sinks together