Replies: 5 comments 1 reply
-
I am also encountering this issue with Docker (Open WebUI 0.1.118 and Ollama 0.1.31). I have not investigated much, but the problem might stem from a null. Container logs:
-
It seems to be working fine for me; I've completely ditched Ollama and just moved over to the
-
One possible (low-priority) improvement would be to set the. These are quite useful for seeing how close you are to using up the context, and for better pre-empting the need to summarise your story so far, etc.
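Assuming the counters in question are the token counts Ollama reports in its responses (`prompt_eval_count` and `eval_count`), a minimal sketch of how a UI could estimate context-window usage might look like this. The helper and its names are illustrative, not part of Open WebUI:

```python
# Hypothetical sketch: estimate context-window usage from the token
# counters Ollama returns (prompt_eval_count / eval_count) against the
# configured context size (num_ctx). Purely illustrative.

def context_usage(prompt_eval_count: int, eval_count: int, num_ctx: int) -> float:
    """Return the fraction of the context window consumed so far, capped at 1.0."""
    if num_ctx <= 0:
        raise ValueError("num_ctx must be positive")
    used = prompt_eval_count + eval_count
    return min(used / num_ctx, 1.0)

# Example: 1500 prompt tokens + 500 generated tokens in a 4096-token window.
print(f"{context_usage(1500, 500, 4096):.0%}")  # roughly 49%
```

A percentage like this, shown next to the chat, would make it obvious when it is time to summarise the conversation.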
-
I am still experiencing the issue with Ollama.
-
Sorry for the delay. I have just tried the stop button and now it's okay: the stop button stops all "generating ollama response" threads and the CPU is no longer used. Thanks a lot for this patch!
-
Bug Report
When I click the stop button in the web UI, generation in the Ollama backend does not stop; the CPU/GPU remains in use.
Description
Bug Summary:
Clicking the stop button in the web UI does not stop generation in the Ollama backend; the CPU/GPU stays busy.
Steps to Reproduce:
Submit a prompt, and when the first word appears, click the stop button. The text stops appearing, but the CPU continues working to generate the response.
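The behaviour being reported is that the generation worker keeps running after stop is pressed. A minimal sketch (not Open WebUI's actual code) of what the stop button should do is a worker that checks a shared `threading.Event` between tokens and exits promptly once it is set:

```python
# Illustrative sketch of cooperative cancellation: the "generation"
# worker polls a threading.Event and aborts as soon as it is set,
# so no CPU is burned after the user clicks stop.
import threading
import time

def generate(stop: threading.Event, out: list) -> None:
    for token in ("The", "quick", "brown", "fox"):
        if stop.is_set():        # stop button pressed: abort generation
            return
        out.append(token)
        time.sleep(0.01)         # stand-in for real token-generation work

stop = threading.Event()
tokens: list = []
worker = threading.Thread(target=generate, args=(stop, tokens))
worker.start()
time.sleep(0.015)                # let a token or two through, then stop
stop.set()
worker.join()
print(tokens)                    # only the tokens emitted before stop was set
```

Without such a check, the thread runs the full generation to completion even though the UI has stopped displaying output, which matches the symptom described above.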
Environment
The mistral model is used with your Helm deployment.
Reproduction Details
Confirmation:
Installation Method
Deployed with the Helm charts described in your documentation.