[Feature] Plans to add model provider support #4030
Comments
Are there any plans to support mistral.ai?
@WBinBin001 you can use Mistral through Ollama: https://docs.nextchat.dev/models/ollama
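For reference, a minimal sketch of calling a Mistral model through Ollama's chat endpoint. This assumes a local Ollama server at its default address (`http://localhost:11434`) and that the `mistral` model has already been pulled; the helper names are illustrative, not part of NextChat.

```typescript
// Sketch: calling a Mistral model via a local Ollama server.
// Assumes Ollama's default endpoint and that `ollama pull mistral` was run.

interface OllamaChatRequest {
  model: string;
  messages: { role: "system" | "user" | "assistant"; content: string }[];
  stream: boolean;
}

// The request body is built separately so it can be inspected or tested
// without a running server.
function buildOllamaChatRequest(model: string, prompt: string): OllamaChatRequest {
  return {
    model,
    messages: [{ role: "user", content: prompt }],
    stream: false,
  };
}

async function chatWithOllama(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildOllamaChatRequest("mistral", prompt)),
  });
  // Ollama's non-streaming /api/chat response carries the reply
  // in `message.content`.
  const data = await res.json();
  return data.message.content;
}
```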
Will it support the API and key of the mistral.ai platform?
The Mistral AI API offers Mistral-small-latest, Mistral-medium-latest, and Mistral-large-latest. Mistral Large in particular has been ranked second on MMLU, behind only GPT-4 (86.4%): it scored 81.2%, surpassing even GPT-4 Turbo (80.48%). This makes the model particularly interesting, and I support its inclusion in the most popular cross-platform chatbots, like ChatGPTNextWeb.
Looking forward to Claude 3 support.
Vote for moonshot |
I think some of these functions should not be implemented in this repo. Different LLM backends can be standardized behind a common API by xusenlinzy/api-for-open-llm or BerriAI/litellm; ChatGPTNextWeb only needs to focus on letting users set URLs and models for different conversations.
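The "URL + model only" approach above can be sketched as follows: any backend exposed through an OpenAI-compatible proxy (e.g. a litellm server) is reached just by swapping the base URL and model name. The base URL, port, and function names below are assumptions for illustration, not an existing API.

```typescript
// Sketch: one client shape for every OpenAI-compatible backend.
// Only the base URL and model name change per provider/proxy.

interface ChatConfig {
  baseUrl: string; // e.g. a local litellm proxy (assumed address)
  model: string;
  apiKey?: string;
}

function buildChatCompletionRequest(cfg: ChatConfig, prompt: string) {
  return {
    // Standard OpenAI-compatible chat-completions path.
    url: `${cfg.baseUrl}/v1/chat/completions`,
    headers: {
      "Content-Type": "application/json",
      // Only attach an Authorization header when a key is configured.
      ...(cfg.apiKey ? { Authorization: `Bearer ${cfg.apiKey}` } : {}),
    },
    body: {
      model: cfg.model,
      messages: [{ role: "user", content: prompt }],
    },
  };
}
```

With this shape, switching from one backend to another is purely a configuration change, which is the point the comment makes.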
Kimi is awesome. Support it! |
Hoping for AWS Bedrock support.
Claude is supported in PR #4457.
I'd really like to see Gemini Pro 1.5 added. |
Are you considering connecting to Tencent Cloud's Hunyuan large model?
Will images / files upload support be included in v3? |
Zhipu has not been connected yet...
There have been many discussions in the community regarding support for multiple models (see #3431). Here, we will gather NextChat's current support plans for different models and provide dynamic updates on the overall progress.
Firstly, we expect to separate the model-related logic from the frontend and may consider creating a separate JavaScript component to standardize it (this could be managed as an independent package). Afterwards, we will develop adapters for each model based on this component/package. We anticipate that each adapter will have at least the following basic capabilities: multimodality (text, images), token billing, and customizable model parameters (temperature, max_tokens, etc.).
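The adapter shape described above could look roughly like the following. All names are hypothetical placeholders, not NextChat's actual API; the sketch just fixes the three baseline capabilities mentioned (multimodal input, token accounting, customizable model parameters) into one interface, with a trivial mock adapter to show how a provider would implement it.

```typescript
// Hypothetical adapter interface for the planned multi-model package.
// Every name here is illustrative, not NextChat's real API.

type MessagePart =
  | { type: "text"; text: string }
  | { type: "image"; url: string }; // multimodality: text and images

interface ChatMessage {
  role: "system" | "user" | "assistant";
  parts: MessagePart[];
}

interface ModelParams {
  temperature?: number; // customizable model parameters
  max_tokens?: number;
}

interface TokenUsage {
  promptTokens: number; // token billing data reported per call
  completionTokens: number;
}

interface ChatResult {
  text: string;
  usage: TokenUsage;
}

// Each provider (OpenAI, Gemini, Claude, Mistral, ...) would ship one adapter.
interface ModelProviderAdapter {
  readonly provider: string;
  listModels(): Promise<string[]>;
  chat(messages: ChatMessage[], params: ModelParams): Promise<ChatResult>;
}

// Trivial mock adapter demonstrating the contract: echoes the text parts back.
class EchoAdapter implements ModelProviderAdapter {
  readonly provider = "echo";

  async listModels(): Promise<string[]> {
    return ["echo-1"];
  }

  async chat(messages: ChatMessage[], _params: ModelParams): Promise<ChatResult> {
    const text = messages
      .flatMap((m) => m.parts)
      .map((p) => (p.type === "text" ? p.text : ""))
      .join("");
    return { text, usage: { promptTokens: 0, completionTokens: 0 } };
  }
}
```

Keeping this interface in an independent package, as the plan suggests, would let the UI stay provider-agnostic while each adapter handles its backend's wire format.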
We have roughly divided the work into the following parts:
NextChat UI Separation
Implementation of Multi-Model Providers
Local Model Manager
Server-Side Multi-Model Service
Current implementation: