
n_tokens_all <= cparams.n_batch #77

Open
ElhamAryanpur opened this issue Apr 23, 2024 · 2 comments

@ElhamAryanpur

It appears that `n_batch` in sessions remains at the default value of 512. I tested some large inputs of over 800 tokens with Hermes Mistral v0.2 and Dolphin Phi 2, and both failed with the same error, `n_tokens_all <= cparams.n_batch`. The workaround was to truncate the input at the 500th token.

Setting the session params' `n_batch` did not solve the issue; llama.cpp still uses the default of 512. The Rust layer does recognize the new value, yet the C++ layer keeps the default.

Taking a look at the llama.cpp source, there are conflicting default values too (2048 and 512), so I'm not sure where the issue lies.
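For reference, this is roughly what I tried (a minimal sketch assuming a llama_cpp-rs-style API with `LlamaModel::load_from_file`, `SessionParams`, and `create_session`; exact names may differ across versions):

```rust
use llama_cpp::{LlamaModel, LlamaParams, SessionParams};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let model = LlamaModel::load_from_file("model.gguf", LlamaParams::default())?;

    // Raise n_batch on the Rust side; the new value is visible in the
    // Rust struct, but the C++ side still behaves as if it were 512.
    let mut params = SessionParams::default();
    params.n_batch = 2048;

    let mut session = model.create_session(params)?;
    // Feeding a prompt longer than ~512 tokens still trips the
    // `n_tokens_all <= cparams.n_batch` assertion.
    session.advance_context("...a prompt of 800+ tokens...")?;
    Ok(())
}
```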

@pedro-devv
Contributor

I couldn't really reproduce the issue, but regardless, I have updated llama.cpp in the feat/llava branch. Could you check whether the error persists on that branch?

@MatthewCash

I just ran into this. You need to set both `n_batch` and `n_ctx`, because the batch size is set to the minimum of those two options:

```rust
let batch = min(session_params.n_ctx, session_params.n_batch) as usize;
```
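In other words, raising `n_batch` alone is not enough if `n_ctx` stays at its default. Something like this worked for me (a sketch assuming the same llama_cpp-rs-style `SessionParams`; field names may differ by version):

```rust
use llama_cpp::{LlamaModel, LlamaParams, SessionParams};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let model = LlamaModel::load_from_file("model.gguf", LlamaParams::default())?;

    let mut params = SessionParams::default();
    // The effective batch size is min(n_ctx, n_batch), so both must be
    // raised; bumping n_batch alone gets clamped back down to n_ctx.
    params.n_ctx = 4096;
    params.n_batch = 4096;

    let mut session = model.create_session(params)?;
    session.advance_context("...a prompt of 800+ tokens...")?;
    Ok(())
}
```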
