[Feature Request]: Confusion about model name display #4997
Problem Description

CUSTOM_MODELS=-all,+gpt-3.5-turbo,+gpt-4,+gpt-4o,+gemini-1.5-pro,+claude-3-5-sonnet-20240620=claude-3.5-sonnet,+llama3-70b-8192=llama3-70b,+glm-4,+qwen-long,+deepseek-chat

Each vendor's API corresponds to its own model names. Some of those request formats are natively supported by ChatGPT-Next-Web, yet the corresponding vendor is not displayed here (for example gemini-1.5-pro and qwen-long), and there are also models from vendors that are not supported at all. Since not every model name is preset in ChatGPT-Next-Web, and some preset models are not supported by relay vendors, CUSTOM_MODELS has to be filled in. The README says @ can be used to specify the vendor, but it is not made clear whether that designation is only for display or whether it selects the corresponding request format. And are there currently only two modes?

Solution Description

Provide a support list: which vendors' request formats are supported, the model names of those vendors, and how they are encoded in the environment variable.

Ideally, @ in the CUSTOM_MODELS variable would specify which vendor's request format to use. Otherwise it feels like a black box, and the model name display is very confusing.

For example, gemini-1.5-pro@google and gemini-1.5-pro@openai would send requests in the Google and OpenAI request formats respectively. This would also make it easier to plug in relays such as oneapi, because there are always many vendors that ChatGPT-Next-Web has not adapted.

At the moment you probably have to read the code to work out the whole logic.

Alternatives Considered

No response

Additional Context

No response
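The CUSTOM_MODELS syntax described above (`-all`, `+model`, `model=alias`, and the `@provider` suffix) can be sketched as a small parser. This is purely an illustrative sketch of the grammar as the issue describes it, not the actual ChatGPT-Next-Web implementation (which lives in the project's TypeScript code and may differ); the field names are hypothetical.

```python
def parse_custom_models(value: str):
    """Split a CUSTOM_MODELS string into structured entries.

    Forms described in the issue (sketch only, not the real parser):
      -all             disable all built-in models
      +model           enable a model
      +model=alias     enable a model, display it under an alias
      +model@provider  (proposed) enable a model and pick which
                       provider's request format to use
    """
    entries = []
    for item in value.split(","):
        item = item.strip()
        if not item:
            continue
        op = "add"
        if item[0] in "+-":
            op = "add" if item[0] == "+" else "remove"
            item = item[1:]
        # alias comes after '=', provider after '@' (hypothetical ordering)
        name, _, alias = item.partition("=")
        name, _, provider = name.partition("@")
        entries.append({
            "op": op,
            "model": name,
            "alias": alias or name,
            "provider": provider or None,  # None = let the app decide
        })
    return entries
```

For example, `parse_custom_models("-all,+gemini-1.5-pro@google")` yields a "remove all" entry followed by an entry whose provider field is "google".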
Indeed, this round made some improvements to how models are structured here:

Finally: it also leaves room to define custom providers via CUSTOM_PROVIDERS in the future, so that "a given model sends messages in the OpenAI format, but goes through a new route /api/{provider_alias}", which would allow multiple relay providers to coexist.
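The routing idea in the comment above could look roughly like the following sketch. Note that CUSTOM_PROVIDERS and the /api/{provider_alias} route are, per the comment, only reserved for the future; the helper below is hypothetical and not an existing ChatGPT-Next-Web API.

```python
def resolve_route(model_spec: str, default_provider: str = "openai") -> str:
    """Map 'model@provider_alias' to the API route a request would use.

    Hypothetical sketch of the proposed /api/{provider_alias} routing;
    the default provider here is an assumption, not project behavior.
    """
    _, _, provider = model_spec.partition("@")
    return f"/api/{provider or default_provider}"
```

With this scheme, `gemini-1.5-pro@google` would be routed to `/api/google`, while a bare `gpt-4` would fall back to the default provider's route.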