
[Feature Request]: 关于模型名称展示的困惑 #4997

Closed
manjieqi opened this issue Jul 13, 2024 · 5 comments
Labels
enhancement New feature or request

Comments

@manjieqi

Problem Description

(screenshot)

CUSTOM_MODELS=-all,+gpt-3.5-turbo,+gpt-4,+gpt-4o,+gemini-1.5-pro,+claude-3-5-sonnet-20240620=claude-3.5-sonnet,+llama3-70b-8192=llama3-70b,+glm-4,+qwen-long,+deepseek-chat

Each vendor's API has its own model names. Some request formats are natively supported by ChatGPT-Next-Web, but the model list does not show the corresponding vendor (for example gemini-1.5-pro and qwen-long), and there are also models from vendors that are not supported at all.

Since not all model names are preset in ChatGPT-Next-Web, and a preset model may not be supported by a relay vendor, it becomes necessary to set CUSTOM_MODELS. The README says @ can be used to specify a vendor, but it does not make clear whether that vendor is only for display or also determines the request format, and whether there are currently only two modes.

Solution Description

Provide a support list: which vendors' request formats are supported, the model names of those vendors, and how they are encoded in the environment variable.

Ideally, @ in the CUSTOM_MODELS variable would directly specify which vendor's request format to use.

Otherwise it feels like a black box, and the way model names are displayed is very confusing.

For example, gemini-1.5-pro@google and gemini-1.5-pro@openai would send requests in Google's and OpenAI's formats respectively. That would also make it easier to connect relays such as oneapi, since there are always many vendors that ChatGPT-Next-Web has not adapted to.
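The semantics requested here (comma-separated entries, `-all`, `+model`, `model=alias`, plus the proposed `model@provider`) could be sketched as a small parser. This is an illustration of the proposal, not NextChat's actual implementation; the `ModelRule` shape and field names are assumptions:

```typescript
// Hypothetical sketch of CUSTOM_MODELS parsing, assuming these rules:
//   -all          disable all built-in models
//   +name         enable a model
//   name=alias    show "alias" in the UI instead of "name"
//   name@prov     (proposed) send requests in provider "prov"'s format
interface ModelRule {
  name: string;
  provider?: string; // e.g. "google", "openai" (proposed @ syntax)
  alias?: string;    // display name shown in the model list
  enabled: boolean;
}

function parseCustomModels(value: string): { disableAll: boolean; rules: ModelRule[] } {
  let disableAll = false;
  const rules: ModelRule[] = [];
  for (const raw of value.split(",").map((s) => s.trim()).filter(Boolean)) {
    if (raw === "-all") {
      disableAll = true;
      continue;
    }
    const enabled = !raw.startsWith("-");
    let rest = raw.replace(/^[+-]/, "");
    // Split off the display alias first: "name@prov=alias" -> alias "alias"
    let alias: string | undefined;
    const eq = rest.indexOf("=");
    if (eq >= 0) {
      alias = rest.slice(eq + 1);
      rest = rest.slice(0, eq);
    }
    // Then the provider: "name@prov" -> provider "prov"
    let provider: string | undefined;
    const at = rest.indexOf("@");
    if (at >= 0) {
      provider = rest.slice(at + 1);
      rest = rest.slice(0, at);
    }
    rules.push({ name: rest, provider, alias, enabled });
  }
  return { disableAll, rules };
}
```

Under these assumed rules, an entry like `+gemini-1.5-pro@google` would yield `{ name: "gemini-1.5-pro", provider: "google", enabled: true }`, so the provider could drive both the displayed vendor and the request format.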

At the moment you probably have to read the code to understand the whole logic.

Alternatives Considered

No response

Additional Context

No response

@manjieqi manjieqi added the enhancement New feature or request label Jul 13, 2024

@lloydzhou
Member

Indeed, this round of changes made some improvements to the model structure:

  1. On one hand, to clean up the previously unclear logic between Azure and OpenAI.
  2. On the other hand, the change makes it easier to connect relay APIs, and lets users adopt a newly released model via CUSTOM_MODELS right away when a vendor ships it but NextChat has not yet been updated.

Finally, it also leaves room for customizing providers later via CUSTOM_PROVIDERS, so that "a model sends messages in OpenAI's format but goes through a new route /api/{provider_alias}", which would allow multiple relay providers to coexist.
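The CUSTOM_PROVIDERS idea described here could, hypothetically, be a table that maps a provider alias to a wire format and an upstream base URL; the type names, aliases, and URLs below are illustrative assumptions, not a committed design:

```typescript
// Hypothetical sketch: each custom provider gets a route /api/{alias}
// and declares which request format its upstream (e.g. a oneapi relay) speaks.
type ApiStyle = "openai" | "google" | "anthropic";

interface CustomProvider {
  alias: string;   // becomes the route segment /api/{alias}
  style: ApiStyle; // request/response format to use when proxying
  baseUrl: string; // upstream endpoint (illustrative URLs below)
}

const providers: CustomProvider[] = [
  { alias: "relay-a", style: "openai", baseUrl: "https://relay-a.example.com/v1" },
  { alias: "relay-b", style: "openai", baseUrl: "https://relay-b.example.com/v1" },
];

// Resolve an incoming path like "/api/relay-a/chat/completions"
// to the upstream URL and the format to speak to it.
function resolveUpstream(path: string): { url: string; style: ApiStyle } | undefined {
  const m = path.match(/^\/api\/([^/]+)(\/.*)?$/);
  if (!m) return undefined;
  const p = providers.find((x) => x.alias === m[1]);
  if (!p) return undefined;
  return { url: p.baseUrl + (m[2] ?? ""), style: p.style };
}
```

Because each relay gets its own alias and route, two OpenAI-format relays can coexist in one deployment, which is exactly the "multiple relay providers at the same time" goal stated above.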


@lloydzhou
Member

#5001

