-
How do I specify and use a different text embedding model? I want to change it to FlagEmbedding's multilingual-e5-large. Currently it is:

embed_model=HuggingFaceEmbedding(model_name='BAAI/bge-small-en', embed_batch_size=10, callback_manager=<llama_index.callbacks.base.CallbackManager object at 0x00000299D62FCFA0>,
Replies: 7 comments 7 replies
-
It just doesn't work for other languages. I tested it, and it doesn't answer the questions.
-
We are working on a fix for using any HF model. It will be live soon. Thank you for your patience while we work on that 🙏
-
It's live with the v0.0.18 release @seoeaa! #109

Example usage:

AutoQueryEngine.from_parameters(embed_model='local:intfloat/multilingual-e5-large')

Related docstring: https://github.com/safevideo/autollm/blob/6b0b6104fcc1d04a5cf400b08f4eae33096ca535/autollm/auto/query_engine.py#L129-L130
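For anyone wondering what that string means: the 'local:<model_id>' form appears to be the llama_index convention for resolving a HuggingFace embedding model that runs locally rather than via a remote embedding API. A tiny sketch of building such spec strings (the helper function is illustrative, not autollm API):

```python
def local_embed_spec(hf_model_id: str) -> str:
    """Build a llama_index-style 'local:<model_id>' embed_model string.

    The 'local:' prefix tells the resolver to load the HuggingFace
    model locally instead of calling a remote embedding API.
    """
    return f"local:{hf_model_id}"

spec = local_embed_spec("intfloat/multilingual-e5-large")
print(spec)  # local:intfloat/multilingual-e5-large

# With autollm installed, the reply's usage would then be:
# query_engine = AutoQueryEngine.from_parameters(embed_model=spec)
```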
-
Generating embeddings: 100%|█████████████████████████████████████████████████████████████████████████| 2594/2594 [11:21<00:00, 3.80it/s]
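As a sanity check, the numbers on that progress bar are internally consistent: 2594 iterations at ~3.80 it/s works out to about 682 s, i.e. 11 min 22 s, matching the reported 11:21 elapsed time up to rounding:

```python
# Sanity-check the tqdm progress line: 2594 iterations at 3.80 it/s.
total_iters = 2594
rate_per_s = 3.80

elapsed_s = total_iters / rate_per_s          # ~682.6 seconds
minutes, seconds = divmod(int(elapsed_s), 60)
print(f"{minutes}m{seconds}s")  # 11m22s, matching the reported 11:21
```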
-
I'll try again with a new database.
-
My script (several lines were truncated when pasting; marked below):

```python
import os
import logging

from pydantic import BaseModel, Field
import gradio as gr
from autollm import AutoQueryEngine, read_files_as_documents  # import path assumed from the autollm README

logging.basicConfig(level=logging.DEBUG)
os.environ["OPENROUTER_API_KEY"] = "sk-or-v1-0c8"  # key truncated in the original post

relative_folder_path = "examples/data"
documents = read_files_as_documents(input_dir="examples/data", recursive=True)

service_context_params = {  # contents truncated in the original post
llm_params = {  # contents truncated in the original post

query_engine = AutoQueryEngine.from_parameters(  # arguments truncated in the original post

query = "..................."
print(response.response)

def greet(query):  # body truncated in the original post

demo = gr.Interface(fn=greet, inputs="text", outputs="text")
```
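The script ends with a Gradio interface whose greet callback got truncated; presumably it should wrap query_engine.query and return the response text. A minimal sketch of that wiring, using a stub engine here so the callback shape is visible without models or API keys (StubQueryEngine and the response format are illustrative, not autollm API):

```python
from types import SimpleNamespace

class StubQueryEngine:
    """Stand-in for the real AutoQueryEngine instance, so the callback
    shape can be shown without loading models or setting an API key."""

    def query(self, query_text):
        # Mimics a llama_index-style response object with a .response attribute.
        return SimpleNamespace(response=f"answer to: {query_text}")

query_engine = StubQueryEngine()

def greet(query):
    # In the real script, query_engine is the AutoQueryEngine built above.
    response = query_engine.query(query)
    return response.response

print(greet("hello"))  # answer to: hello

# With gradio installed, this callback plugs straight in:
# demo = gr.Interface(fn=greet, inputs="text", outputs="text")
# demo.launch()
```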
-
Is it possible to run the "Generating embeddings" step asynchronously?
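Even without library support, blocking embedding batches can be overlapped from asyncio by offloading each batch to a thread and capping concurrency with a semaphore. A minimal sketch, where embed_batch is a hypothetical stand-in for one real (blocking) batch of the "Generating embeddings" loop above, not an autollm or llama_index function:

```python
import asyncio

def embed_batch(texts):
    # Hypothetical blocking embedding call; a real one would return
    # one vector per input text (here: a dummy 1-dim "vector").
    return [[float(len(t))] for t in texts]

async def embed_all(texts, batch_size=10, max_concurrency=4):
    # Semaphore caps how many batches run at once, so the CPU/GPU
    # is not oversubscribed while batches overlap.
    sem = asyncio.Semaphore(max_concurrency)

    async def run(batch):
        async with sem:
            # Offload the blocking call to a worker thread (Python 3.9+).
            return await asyncio.to_thread(embed_batch, batch)

    batches = [texts[i:i + batch_size] for i in range(0, len(texts), batch_size)]
    results = await asyncio.gather(*(run(b) for b in batches))
    # Flatten per-batch results back into one vector list, in order.
    return [vec for batch in results for vec in batch]

vectors = asyncio.run(embed_all([f"doc {i}" for i in range(25)]))
print(len(vectors))  # 25
```

Note this only helps hide I/O or thread-released work; a model that saturates one GPU won't embed faster just because batches are submitted concurrently.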
Beta Was this translation helpful? Give feedback.