Issues: ggerganov/llama.cpp

Issues list

Segmentation Fault on GPU
#7337 opened May 17, 2024 by djain-fujitsu
Issues: Unable for multiuser prompt
#7336 opened May 17, 2024 by OlivesHere
Support Falcon2-11B [enhancement]
#7318 opened May 16, 2024 by reneleonhardt
Add support for multilingual Viking models, please. [enhancement]
#7309 opened May 15, 2024 by JohnClaw
Improve and expand Wikipedia article about llama.cpp [enhancement]
#7294 opened May 15, 2024 by fffelix-jan
enable rpc for server [enhancement]
#7292 opened May 15, 2024 by steampunque
llama : save downloaded models to local cache [enhancement, examples, good first issue]
#7252 opened May 13, 2024 by ggerganov