
Error: OpenAI or Azure OpenAI API key or Token Provider not found with Ollama #68

Closed
donsimn opened this issue May 29, 2024 · 1 comment


donsimn commented May 29, 2024

I'm just trying out embedjs with my local Ollama server. I wrote this super simple code which fails:

const { RAGApplicationBuilder, Ollama } = await import('@llm-tools/embedjs');
const { LanceDb } = await import('@llm-tools/embedjs/vectorDb/lance');

// Build a RAG application using a local Ollama model as the LLM
// and a temporary LanceDB instance as the vector store.
const rag = await new RAGApplicationBuilder()
    .setModel(new Ollama({
        modelName: "llama3",
        baseUrl: 'http://localhost:11434'
    })).setVectorDb(new LanceDb({ path: 'lance-', isTemp: true }))
    .build();

console.log(await rag.query("Hello"));

The error:

file:///C:/Users/simon/Documents/GitHub/william-yeye/node_modules/@langchain/openai/dist/embeddings.js:128
            throw new Error("OpenAI or Azure OpenAI API key or Token Provider not found");
                  ^

Error: OpenAI or Azure OpenAI API key or Token Provider not found
    at new OpenAIEmbeddings (file:///C:/Users/simon/Documents/GitHub/william-yeye/node_modules/@langchain/openai/dist/embeddings.js:128:19)
    at new OpenAi3SmallEmbeddings (file:///C:/Users/simon/Documents/GitHub/william-yeye/node_modules/@llm-tools/embedjs/dist/embeddings/openai-3small-embeddings.js:10:22)
    at new RAGApplication (file:///C:/Users/simon/Documents/GitHub/william-yeye/node_modules/@llm-tools/embedjs/dist/core/rag-application.js:69:61)
    at RAGApplicationBuilder.build (file:///C:/Users/simon/Documents/GitHub/william-yeye/node_modules/@llm-tools/embedjs/dist/core/rag-application-builder.js:71:24)
    at file:///C:/Users/simon/Documents/GitHub/william-yeye/dist/index.js:8:6
adhityan (Collaborator) commented
Yes, this is expected behaviour. Basically, you need two things to run a RAG stack:

  1. An LLM
  2. An embedding model

In your case, you are using Ollama as the LLM, and by default (unless you specify otherwise) the library uses an OpenAI embedding model (OpenAi3SmallEmbeddings, as your stack trace shows). The error you see comes from the embedding model failing to find an OpenAI API key. Right now there is no support for local embedding models via Ollama (only Ollama-based LLMs are supported), but there is a plan to add it soon. There are other embedding models to choose from, though; see the sketch below.

adhityan added the question (Further information is requested) label on May 29, 2024