[FR] Allow custom OpenAI API base URL #943
Comments
We are using the langchain Python package to build clients for OpenAI, so it might be possible to use OPENAI_BASE_URL to switch the URL. We will need time to experiment whether this works properly. The biggest issue is whether the currently available compatibility modes (for example, the one provided by Ollama) can actually accept the current request payload from the AppFlowy AI service without any changes at all. If it is only a proxy server that relays the request to OpenAI, then this will probably work. Reference for the latest API doc, specifically this line:
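To make the OPENAI_BASE_URL idea concrete, here is a minimal, hedged sketch of pointing langchain's OpenAI client at a custom base URL. The Ollama URL, model name, and dummy key are assumptions for illustration, not AppFlowy configuration:

```python
# Minimal sketch, assuming langchain-openai is installed; this is not the
# AppFlowy AI service's actual code. URL, model, and key are illustrative.
from langchain_openai import ChatOpenAI

# When base_url is not passed explicitly, the underlying openai client also
# honors the OPENAI_BASE_URL environment variable.
llm = ChatOpenAI(
    model="llama3",                        # whatever the backend serves
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
    api_key="ollama",                      # many local servers ignore the key
)

print(llm.invoke("ping").content)
```

Whether the AppFlowy AI service's exact request payload survives such a backend unchanged is the open question raised above.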
I was spending some time looking into this: as it is right now, even using …
This might be difficult to solve: we are using OpenAIEmbeddings, and for some reason pydantic validation fails when an OpenAI API key is not supplied, similar to the issue reported here: langchain-ai/langchain#7251. Moreover, the validation of the OpenAI API key is not sent to the proxy or alternate base URL, but to the actual OpenAI endpoint: platform.openai.com. I assume (apologies in advance if I am wrong) that the primary motivation for having a custom OpenAI API base URL is to use Ollama hosted on a server with OpenAI compatibility. For such a use case, it doesn't appear that an alternate base URL alone can resolve the issue. Instead, actual support for Ollama-based embeddings is needed in the AppFlowy AI service: https://python.langchain.com/docs/integrations/text_embedding/ollama/
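For reference, the linked integration reduces to something like the following hedged sketch (the model name and base URL are assumptions); it needs no OpenAI API key at all, which is what sidesteps the pydantic validation problem:

```python
# Hedged sketch of native Ollama embeddings via langchain-ollama; not
# AppFlowy code. Model name and base URL are illustrative assumptions.
from langchain_ollama import OllamaEmbeddings

embeddings = OllamaEmbeddings(
    model="nomic-embed-text",           # any embedding model pulled into Ollama
    base_url="http://localhost:11434",  # no OpenAI API key involved
)

vector = embeddings.embed_query("AppFlowy is an open-source workspace.")
print(len(vector))  # dimensionality depends on the chosen model
```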
I support this. Some users might decide to use alternative providers, such as Anthropic, Google, LLM aggregators like OpenRouter, or even locally deployed models (that support OpenAI-compatible endpoints), to avoid dependency on third-party services.
1~3 main use cases of the proposed feature
For now, it looks like the AI container only sends requests to OpenAI at api.openai.com. It would be great if the base URL could be customized, as sketched below.
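For illustration only, the request amounts to making the endpoint configurable instead of hardcoded. A hedged sketch with the plain openai Python SDK; the environment-variable fallback and local URL are assumptions, not existing AppFlowy settings:

```python
# Sketch of the requested behaviour with the openai Python SDK (>= 1.0);
# the env-var names mirror the SDK's own defaults, not AppFlowy config.
import os
from openai import OpenAI

client = OpenAI(
    base_url=os.getenv("OPENAI_BASE_URL", "https://api.openai.com/v1"),
    api_key=os.getenv("OPENAI_API_KEY", "dummy"),  # local servers often ignore it
)

resp = client.chat.completions.create(
    model="llama3",  # a model served by the OpenAI-compatible backend
    messages=[{"role": "user", "content": "ping"}],
)
print(resp.choices[0].message.content)
```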