Issues: ggerganov/llama.cpp
- #7333 llava surgery script for new llava-arch model from Intel [bug-unconfirmed] — opened May 16, 2024 by KohakuBlueleaf
- #7318 Support Falcon2-11B [enhancement] — opened May 16, 2024 by reneleonhardt
- #7311 ggml_validate_row_data finding nan value for IQ4_NL [bug-unconfirmed] — opened May 15, 2024 by bartowski1182
- #7309 Add support for multilingual Viking models, please [enhancement] — opened May 15, 2024 by JohnClaw
- #7306 Possible performance boost with 2-pass online softmax [bug-unconfirmed] — opened May 15, 2024 by zixuanweeei
- #7294 Improve and expand Wikipedia article about llama.cpp [enhancement] — opened May 15, 2024 by fffelix-jan
- #7289 Llama-3 Instruct tokenizer_config.json changes in relation to the currently fetched llama-bpe configs [enhancement] — opened May 14, 2024 by Spacellary
- #7283 Infinite update_slots issue on latest build (1265c67) [bug-unconfirmed] — opened May 14, 2024 by Leowolf93
- #7277 /embeddings endpoint sometimes does not return embedding [bug-unconfirmed] — opened May 14, 2024 by marcingomulkiewicz
- #7276 ThunderKittens: a simple yet faster flashattention alternative [enhancement] — opened May 14, 2024 by sorasoras
- #7271 EOT token incorrectly set for Mistral-v0.2 trained with added ChatML tokens [bug-unconfirmed] — opened May 14, 2024 by xzuyn
- #7268 Does it make sense to optimize strlen in this function with for loops? [bug-unconfirmed] — opened May 13, 2024 by GermanAizek
- #7261 Metal (iOS): Compute function exceeds available temporary registers [bug-unconfirmed] — opened May 13, 2024 by guinmoon
- #7253 while finetuning llama.cpp doesn't create .bin file... [bug-unconfirmed] — opened May 13, 2024 by elijiahmiro
- #7252 llama : save downloaded models to local cache [enhancement, examples, good first issue] — opened May 13, 2024 by ggerganov