-
Llama 3 is not multi-modal; it does not support images as input.
-
You can use a self-hosted deployment of LiteLLM now; update the OpenAI config, including the base URL, to point to your LiteLLM deployment.
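To illustrate, a self-hosted LiteLLM proxy exposes an OpenAI-compatible API, so pointing the base URL at it is enough. A minimal sketch, assuming a proxy listening at `http://localhost:4000` and an illustrative model alias `llama3-70b` (both are assumptions; substitute your own deployment details):

```python
# Sketch: build an OpenAI-style chat completion request against a
# self-hosted LiteLLM proxy. URL and model name are assumptions.
import json
import urllib.request

LITELLM_BASE_URL = "http://localhost:4000"  # hypothetical LiteLLM endpoint

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-compatible /v1/chat/completions request."""
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{LITELLM_BASE_URL}/v1/chat/completions",
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("llama3-70b", "Hello")
print(req.full_url)  # → http://localhost:4000/v1/chat/completions
```

In Open WebUI itself the same effect is achieved by setting the OpenAI API base URL in the admin settings to the proxy's address; no client code is needed.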
-
After further testing, it's not so much a crash as a freeze: Docker doesn't restart the container even in "always" restart mode, so we now have a Docker healthcheck in place, with docker-autoheal restarting the container to reduce downtime.
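The healthcheck-plus-autoheal setup described above could look roughly like the following docker-compose fragment. This is a hedged sketch, not the poster's actual config: the service name, port, `/health` path, and the use of the `willfarrell/autoheal` image are assumptions.

```yaml
# Hypothetical compose fragment: mark the container unhealthy when the
# HTTP healthcheck fails, and let autoheal restart it. Port and /health
# path are assumptions; adjust to your deployment.
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    restart: always
    labels:
      - autoheal=true   # opt this container in to autoheal restarts
    healthcheck:
      test: ["CMD-SHELL", "curl -fsS http://localhost:8080/health || exit 1"]
      interval: 30s
      timeout: 5s
      retries: 3
  autoheal:
    image: willfarrell/autoheal
    restart: always
    environment:
      - AUTOHEAL_CONTAINER_LABEL=autoheal
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
```

This works around the fact that Docker's `restart: always` policy only reacts to the container process exiting, not to a hung process that stays alive.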
-
After upgrading to Open WebUI v0.2.4, if we try to upload an image while using the same model (Llama 3) via vLLM (not LiteLLM; we have not tried an external LiteLLM instance yet), Open WebUI no longer freezes.
-
While true, it's inconsistent behaviour compared with other platforms, so it can be confusing.
On one hand it's great to be able to continue a conversation with another model, but 99% of the time when I change models it's for a new context, so carrying the context across just creates the potential for bugs like this. If changing models results in an error (no longer a freeze) for every subsequent message in the conversation, then surely that is a bug.
…________________________________
From: Justin Hayes
Sent: Thursday, 6 June 2024 8:18 AM
Subject: Re: [open-webui/open-webui] Crash when uploading an image with model Llama 3 70B (Discussion #2498)
considering llama3 does not in fact support images... seems to be working as intended now. Previous behaviour where it would crash after trying anyway was a bug.
-
Curious, do you actually switch models and continue the conversation with the new model? Perhaps I should try this more; I'm still quite new to testing this project.
-
Bug Report
Description
Bug Summary:
We use the model Llama 3 70B via LiteLLM and it works well, but as soon as we upload any image file (PNG or JPG) the WebUI becomes unresponsive and has to be restarted; no error is found in the logs. PDFs do not crash the WebUI, only images.
Normally, with models that do not support image uploads, such as GPT-3.5-turbo, we get the following error message:
"Uh-oh! There was an issue connecting to gpt-3.5-turbo.
External: Invalid content type. image_url is only supported by certain models."
But it seems that with Llama 3 70B via LiteLLM this content-type check is not performed correctly.
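For context on the check that error message describes: in the OpenAI chat format, an uploaded image becomes an `image_url` content part inside the message, which only vision-capable models accept. A minimal sketch of such a guard; the function name and the model list are hypothetical, not Open WebUI's or LiteLLM's actual code:

```python
# Sketch of the content-type check the error message describes.
# VISION_MODELS and validate_messages are illustrative only.
VISION_MODELS = {"gpt-4o", "llava"}  # hypothetical allow-list

def validate_messages(model: str, messages: list) -> None:
    """Reject image_url content parts for models without vision support."""
    for msg in messages:
        content = msg.get("content")
        if isinstance(content, list):  # multimodal content parts
            for part in content:
                if part.get("type") == "image_url" and model not in VISION_MODELS:
                    raise ValueError(
                        f"Invalid content type. image_url is not supported by {model}."
                    )

# A text-plus-image message in the OpenAI chat format:
messages = [{
    "role": "user",
    "content": [
        {"type": "text", "text": "What is in this picture?"},
        {"type": "image_url", "image_url": {"url": "data:image/png;base64,..."}},
    ],
}]
```

With a guard like this, `validate_messages("llama3-70b", messages)` would raise a clear error instead of the request being forwarded to a model that cannot handle it.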
Steps to Reproduce:
Use the model Llama 3 70B via LiteLLM
Open the WebUI and set the model to Llama 3 70B
Test the model with a text prompt; notice that it works
Test the model with an image upload and a text prompt; notice that the WebUI becomes unresponsive
Expected Behavior:
An error message stating that the model does not support image input (as with gpt-3.5-turbo), leaving the conversation usable.
Actual Behavior:
The WebUI becomes unresponsive and has to be restarted; no error appears in the logs.
Environment
Open WebUI Version: v0.1.125
Operating System: Ubuntu 20.04
Reproduction Details
Confirmation:
Logs and Screenshots
Browser Console Logs:
[Include relevant browser console logs, if applicable]
Docker Container Logs:
No errors were found in the container logs.
Screenshots (if applicable):
[Attach any relevant screenshots to help illustrate the issue]
Installation Method
Docker-compose file
Additional Information
[Include any additional details that may help in understanding and reproducing the issue. This could include specific configurations, error messages, or anything else relevant to the bug.]
Note
If the bug report is incomplete or does not follow the provided instructions, it may not be addressed. Please ensure that you have followed the steps outlined in the README.md and troubleshooting.md documents, and provide all necessary information for us to reproduce and address the issue. Thank you!