On adding llama_cpp-rs to my Cargo.toml, llama.cpp seems to be locked to an older version. I'm trying to use Phi-3 128k in a project and can't, because it depends on a PR that was merged into llama.cpp about two weeks ago.
Is there an easy way to make sure that llama_cpp-rs is using the latest llama.cpp commit?
In the meantime, if you want to try it out, put this in your Cargo.toml:

```toml
llama_cpp = { git = "https://github.com/vargad/llama_cpp-rs.git", branch = "bump_3038" }
```
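As a side note (not specific to this fork, just standard Cargo behavior), you can also pin a git dependency to an exact commit with `rev` instead of a branch, so a later push to the branch can't break your build. The hash below is a placeholder; substitute the commit you actually want:

```toml
# Pin to one specific commit instead of a moving branch.
# "<commit-sha>" is a placeholder; replace it with a real commit hash.
llama_cpp = { git = "https://github.com/vargad/llama_cpp-rs.git", rev = "<commit-sha>" }
```

After editing Cargo.toml, `cargo update -p llama_cpp` will refresh the lockfile entry.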
I only tested the Phi-3 4K model, which works with the change above.
I tried this, and it made it possible to load Pythia models, though still not Gemma models (perhaps Gemma support in llama.cpp is more recent). Thanks!