Issues: ollama/ollama
Ollama v0.1.34: timeout issue on CodeLlama 34B
bug · #4283 · opened May 9, 2024 by humza-sami

Can Ollama support Huawei Ascend NPUs?
feature request · #4282 · opened May 9, 2024 by lonngxiang

Error: pull model manifest: file does not exist
bug · #4280 · opened May 9, 2024 by taozhiyuai

Ollama reports an error when running an AI model on the GPU
bug · #4279 · opened May 9, 2024 by xiaomo0925

Unexpected increase in inference time as context window grows on Llama3:7b
bug · #4277 · opened May 9, 2024 by gusanmaz

Degraded accuracy when using the nomic-embed-text (v1.5) model with Ollama versions 0.1.32 and 0.1.33
bug · #4275 · opened May 9, 2024 by Ganesh1030

Update command for Linux version
feature request · #4274 · opened May 9, 2024 by Maplerxyz

Windows Ollama 0.1.34 cannot use the GPU with an NVIDIA RTX 4060
bug · #4270 · opened May 9, 2024 by zhafree

ollama_llama_server is still running after exiting via SIGINT (client is llama_index in Python)
bug · #4267 · opened May 8, 2024 by RobbyCBennett

Unable to bind the private EC2 instance IP in the Ollama service file to restrict access
bug · #4263 · opened May 8, 2024 by devivaraprasad901

Stop loading the model when I close my computer
bug · #4259 · opened May 8, 2024 by chaserstrong

Please clean up useless models uploaded by users
feature request · #4258 · opened May 8, 2024 by taozhiyuai

Max retries exceeded: HTTP status 502 Bad Gateway while pushing a model
bug · #4255 · opened May 8, 2024 by taozhiyuai

How does the Ollama model reside on the GPU?
feature request · needs more info · #4254 · opened May 8, 2024 by lonngxiang

A repeatable hang issue on Linux with dual Radeon GPUs
amd · bug · #4253 · opened May 8, 2024 by eliranwong

Ollama using minimal GPU on Windows
needs more info · #4251 · opened May 8, 2024 by Freffles

The model does not output correctly in Ollama, but it works fine in LM Studio
bug · #4249 · opened May 8, 2024 by vawterdada