[Feature Request]: support for gemini-1.5-flash-latest #4701
Comments
gemini-1.5-flash currently does not support streaming responses, so the streaming-response toggle needs to be turned off.
merged in 0bf758a
@daiaji streaming mode works fine on my end. Would you mind sharing more specifics about the issue you hit when you tried gemini flash?
The flash model should be added to the "isVisionModel" list.
@gochendong nice catch! Working on it.
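For illustration, the fix discussed above could look like the following sketch. The helper name `isVisionModel` comes from the thread; the keyword list and matching strategy here are assumptions, not the project's actual implementation:

```typescript
// Hypothetical sketch of a vision-model check (names and keyword list
// are assumptions; only the helper name "isVisionModel" is from the thread).
const VISION_MODEL_KEYWORDS: string[] = [
  "gpt-4-vision",
  "gemini-pro-vision",
  // gemini-1.5-flash accepts image input, so gemini-1.5-* models
  // should be treated as vision-capable too.
  "gemini-1.5",
];

function isVisionModel(model: string): boolean {
  return VISION_MODEL_KEYWORDS.some((keyword) => model.includes(keyword));
}
```

With a substring match like this, both `gemini-1.5-flash` and `gemini-1.5-flash-latest` are covered without listing every tag variant separately.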
The v1 API does not support streaming responses, while the v1beta API does. I think ChatGPTNextWeb should add a switch in the web interface to turn off streaming responses, to handle APIs that do not support them.
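The v1 vs. v1beta distinction above could be handled when building the request URL. This is a minimal sketch, assuming a boolean `stream` setting exposed by the proposed toggle; the endpoint paths follow the public Gemini REST API (`generateContent` for one-shot, `streamGenerateContent` for server-sent-event streaming):

```typescript
// Hypothetical sketch: pick the Gemini endpoint based on a streaming toggle.
// The "stream" flag is an assumed app setting, not an existing option.
function geminiEndpoint(model: string, stream: boolean): string {
  const base = "https://generativelanguage.googleapis.com";
  // streamGenerateContent (SSE) is available on the v1beta API;
  // plain generateContent works on v1.
  return stream
    ? `${base}/v1beta/models/${model}:streamGenerateContent?alt=sse`
    : `${base}/v1/models/${model}:generateContent`;
}
```

A toggle like this would let users fall back to the non-streaming v1 endpoint when their deployment does not support streaming.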
Problem Description
new Google model
Solution Description
support image input
Alternatives Considered
No response
Additional Context
No response