I'm just trying out embedjs with my local Ollama server. I wrote this super simple code, which fails:
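(The exact snippet isn't preserved in this thread; a minimal sketch of the kind of setup that reproduces it, assuming the RAGApplicationBuilder and Ollama exports from @llm-tools/embedjs, with the model name and base URL as placeholders, looks roughly like this:)

import { RAGApplicationBuilder, Ollama } from '@llm-tools/embedjs';

// Build a RAG application with Ollama as the LLM only.
// No embedding model is configured and no OPENAI_API_KEY is set,
// so the builder falls back to an OpenAI embedding model and throws.
const app = await new RAGApplicationBuilder()
    .setModel(new Ollama({ modelName: 'llama3', baseUrl: 'http://localhost:11434' }))
    .build();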
The error:

file:///C:/Users/simon/Documents/GitHub/william-yeye/node_modules/@langchain/openai/dist/embeddings.js:128
throw new Error("OpenAI or Azure OpenAI API key or Token Provider not found");
^
Error: OpenAI or Azure OpenAI API key or Token Provider not found
at new OpenAIEmbeddings (file:///C:/Users/simon/Documents/GitHub/william-yeye/node_modules/@langchain/openai/dist/embeddings.js:128:19)
at new OpenAi3SmallEmbeddings (file:///C:/Users/simon/Documents/GitHub/william-yeye/node_modules/@llm-tools/embedjs/dist/embeddings/openai-3small-embeddings.js:10:22)
at new RAGApplication (file:///C:/Users/simon/Documents/GitHub/william-yeye/node_modules/@llm-tools/embedjs/dist/core/rag-application.js:69:61)
at RAGApplicationBuilder.build (file:///C:/Users/simon/Documents/GitHub/william-yeye/node_modules/@llm-tools/embedjs/dist/core/rag-application-builder.js:71:24)
at file:///C:/Users/simon/Documents/GitHub/william-yeye/dist/index.js:8:6
Yes, this is expected behaviour. You need two things to run a RAG stack:

- an LLM
- an embedding model

In your case, you are using Ollama as the LLM, and by default (unless you specify otherwise) the library uses an OpenAI embedding model (OpenAi3SmallEmbeddings, as the stack trace shows). The error you see comes from that embedding model: it cannot find an OpenAI API key (or Azure token provider), so it cannot call OpenAI. Right now there is no support for local embedding models via Ollama (only Ollama-based LLMs are supported), but there is a plan to add that soon. There are other embedding models to choose from, though; a sketch of both workarounds follows.
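As a rough sketch: either provide an OpenAI key so the default embeddings can authenticate, or pass a different embedding model to the builder. The setEmbeddingModel method and the CohereEmbeddings class name below are assumptions; check the embedjs README for the exact builder methods and exports in your version.

// Option 1: keep the default OpenAI embeddings and provide a key before running, e.g.
//   export OPENAI_API_KEY=sk-...

// Option 2: explicitly choose a non-OpenAI embedding model on the builder (assumed API):
import { RAGApplicationBuilder, Ollama, CohereEmbeddings } from '@llm-tools/embedjs';

const app = await new RAGApplicationBuilder()
    .setModel(new Ollama({ modelName: 'llama3', baseUrl: 'http://localhost:11434' })) // Ollama stays as the LLM
    .setEmbeddingModel(new CohereEmbeddings()) // assumed export; needs its own API key (COHERE_API_KEY)
    .build();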