Issues: abetlen/llama-cpp-python
pip install llama-cpp-python gets stuck forever at "Configuring CMake" in Docker (#1908, opened Jan 24, 2025 by jiafatom)
OpenAI API max_completion_tokens argument is ignored (#1907, opened Jan 24, 2025 by BenjaminMarechalEVITECH)
Add minicpm-o and qwen2-vl to the list of supported multimodal models (#1904, opened Jan 24, 2025 by kseyhan)
OSError: exception: access violation reading 0x0000000000000000 (#1903, opened Jan 24, 2025 by andretisch)
DeepSeek-R1-Distill-Qwen-32B-GGUF needs the deepseek-r1-qwen tokenizer (#1900, opened Jan 20, 2025 by Kenshiro-28)
Using a pre-built wheel currently requires specifying the right version, e.g. llama-cpp-python==0.3.4 (#1872, opened Dec 19, 2024 by eeegnu)
chatml-function-calling not adding tool description to the prompt (#1869, opened Dec 16, 2024 by undo76)