Hi, using the local Ollama LLM should not require the OpenAI or Azure OpenAI LLM.
But currently the error "Error: OpenAI or Azure OpenAI API key or Token Provider not found" is thrown when attempting to run the Ollama example.
This is actually not a bug. Basically, you need two things to run a RAG stack:
LLM
Embedding model
In your case, you are using Ollama as the LLM, and by default (unless you specify otherwise) the library uses OpenAI's LargeEmbedding as its embedding model.
The error you see comes from the embedding model being unable to reach OpenAI. Right now there is no support for local embedding models via Ollama (only Ollama-based LLMs are supported), but there is a plan to add it soon. There are other embedding models to choose from, though; refer to the documentation for those.
The roadmap currently includes adding support for Ollama-based local embedding models, but it is not expected to be available before the start of Q3. If you are interested, you can contribute a PR with a non-API-key-based embedding model, and I will prioritize merging it.
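In the meantime, if you want to prototype locally, Ollama itself exposes an embeddings endpoint, so you can call a local embedding model directly over HTTP without any API key. Here is a minimal TypeScript sketch (the `ollamaEmbed` helper and the model name are purely illustrative; it assumes Ollama is running on its default port 11434 with an embedding-capable model such as nomic-embed-text already pulled):

```ts
// Hypothetical sketch: fetch an embedding vector from a local Ollama server.
// Assumes Ollama is running on localhost:11434 and the model has been pulled
// (e.g. `ollama pull nomic-embed-text`).
async function ollamaEmbed(
  text: string,
  model = "nomic-embed-text"
): Promise<number[]> {
  const res = await fetch("http://localhost:11434/api/embeddings", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model, prompt: text }),
  });
  if (!res.ok) {
    throw new Error(
      `Ollama embeddings request failed: ${res.status} ${res.statusText}`
    );
  }
  const data = (await res.json()) as { embedding: number[] };
  return data.embedding;
}

// Usage:
// const vector = await ollamaEmbed("hello world");
// console.log(vector.length); // dimensionality depends on the model
```

A PR would essentially wrap a call like this in the library's embedding-model interface; the exact shape of that interface depends on the library's internals, so treat the above only as a starting point.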