It appears that `n_batch` in sessions remains at the default value of 512. I tested some large inputs of over 800 tokens with Hermes Mistral v0.2 and Dolphin Phi 2, and both failed with the same error: `n_tokens_all <= cparams.n_batch`. The workaround was to truncate the input at the 500th token.
Setting the session params' `n_batch` did not solve the issue, as llama.cpp still uses the default of 512. The Rust layer does recognize the new value, yet the cpp layer keeps the default.
Looking at the llama.cpp source, there are conflicting default values too (2048 and 512), so I'm not sure where the issue lies.
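Until the `n_batch` value propagates correctly, a less lossy workaround than truncating at token 500 is to split the prompt into chunks no larger than the effective batch size and decode them sequentially. This is a hedged sketch: `Token` is a stand-in for the bindings' token type, and the batching call itself is omitted since the affected crate's API isn't shown above.

```rust
// Stand-in for the bindings' token type (assumption; the real crate
// likely has its own token newtype).
type Token = i32;

/// Split `tokens` into consecutive chunks of at most `n_batch` tokens,
/// so that each chunk satisfies llama.cpp's internal check
/// `n_tokens_all <= cparams.n_batch` even when the context keeps the
/// default batch size of 512.
fn chunk_for_batch(tokens: &[Token], n_batch: usize) -> Vec<&[Token]> {
    assert!(n_batch > 0, "n_batch must be positive");
    tokens.chunks(n_batch).collect()
}

fn main() {
    // 800 dummy tokens, like the failing prompts described above.
    let tokens: Vec<Token> = (0..800).collect();
    let chunks = chunk_for_batch(&tokens, 512);
    // Each chunk would be fed to the decode call in order, so no
    // input is lost to truncation.
    assert_eq!(chunks.len(), 2);
    assert_eq!(chunks[0].len(), 512);
    assert_eq!(chunks[1].len(), 288);
    println!("{} chunks", chunks.len());
}
```

This keeps the full prompt in context instead of discarding everything past token 500, at the cost of one decode call per chunk.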
I couldn't really reproduce the issue, but regardless, I have updated llama.cpp in the feat/llava branch. Could you check whether the error persists there?