The visual perception speed of GPT-4o has slowed down. #2697
Replies: 5 comments 1 reply
-
LibreChat prioritizes the currently selected model as the "vision" model. If it's not vision-capable, it will use the first vision-capable model available in the model list, by order of the list. Being able to select the vision model outright would improve this current functionality.

If you are noticing response degradation, it's likely a result of including images with every request. You can disable this behavior with the corresponding conversation/preset setting; doing so makes it only send images when you attach them.

TL;DR: if you select a vision-capable model such as gpt-4o directly, it will be used for image requests.
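The fallback order described above can be sketched roughly as follows; `pickVisionModel`, `ModelInfo`, and the field names are illustrative assumptions for this sketch, not LibreChat's actual code:

```typescript
interface ModelInfo {
  name: string;
  visionCapable: boolean;
}

// Returns the model used for image requests: the currently selected model
// if it is vision-capable, otherwise the first vision-capable model in the
// model list, by order of the list.
function pickVisionModel(
  current: ModelInfo,
  models: ModelInfo[],
): ModelInfo | undefined {
  if (current.visionCapable) {
    return current;
  }
  return models.find((m) => m.visionCapable);
}
```

Under this logic, selecting gpt-4o directly means it is used for images as-is, while selecting a non-vision model hands image requests to whichever vision-capable model happens to come first in the list.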
-
The logs that actually matter immediately follow. Make sure you are updated to the latest commit:

```
commit 638ac5bba61a524cc4ae99711a91f19572c4f2a0 (origin/main, origin/HEAD, main)
Author: Danny Avila <[email protected]>
Date:   Mon May 13 14:25:02 2024 -0400

    🚀 feat: gpt-4o (#2692)

    * 🚀 feat: gpt-4o
    * update readme.md
    * feat: Add new test case for getMultiplier function
    * feat: Refactor getMultiplier function to use valueKey variable
```

Make sure you use the updated version. I can't reproduce the model switching to a different one on my end.
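To confirm your checkout already includes the commit above, assuming a standard git clone of the LibreChat repository (commands below are generic git usage, not LibreChat-specific tooling):

```shell
# Fetch the latest changes and show the current HEAD commit
git fetch origin
git log -1 --oneline

# Exit status 0 means the gpt-4o commit is already in your history
git merge-base --is-ancestor 638ac5bba61a524cc4ae99711a91f19572c4f2a0 HEAD \
  && echo "commit present"

# Update to the latest main if it is not
git pull origin main
```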
-
Thank you again. I have updated it, and it is now back to normal.
-
Additionally, I just tried the assistant model, and gpt-4o is currently not supported in Assistants v1.
-
I updated to GPT-4o today. It processes text very quickly, but when I send it images, the processing speed reverts to that of GPT-4 Turbo, and subsequent turns of the conversation also become slower. I'm not sure whether incomplete integration of the vision capability is causing this issue.