[Bug] Error deploying HuggingFace model llava-v1.6-mistral-7b with lmdeploy: Unrecognized model type llava_mistral #1573
Comments
Currently, only the turbomind backend supports vision-language models, and the turbomind backend doesn't support the MoE architecture, so you can't deploy the Mistral model.
Sorry, we are not using the MoE-architecture Mistral; we use llava-v1.6-mistral-7b from Hugging Face, link:
When I use the pytorch backend, the same error occurs, as below: lmdeploy serve api_server /data/kai.qiao/model_repo/llava/liuhaotian/llava-v1.6-mistral-7b --server-port 3333 --backend pytorch --model-name mistral ValueError: The checkpoint you are trying to load has model type
Currently, we only support llava-llama. We will check and support llava-mistral later if it doesn't use MoE or window attention, which the turbomind backend doesn't currently support.
Thank you very much for your patience.
#1579 works for llava_mistral now. You may give it a try.
I compiled from source, and running it always raises an error, like below: python mistra_7b.py During handling of the above exception, another exception occurred: Traceback (most recent call last):
After reinstalling, the error is the same.
This issue is marked as stale because it has been marked as invalid or awaiting response for 7 days without any further response. It will be closed in 5 days if the stale label is not removed or if there is no further response.
This issue is closed because it has been stale for 5 days. Please open a new issue if you have similar issues or any new updates.
Checklist
Describe the bug
When attempting to deploy the llava-v1.6-mistral-7b model from HuggingFace using lmdeploy, I encountered an error indicating that the model type llava_mistral is not recognized by the Transformers library. This occurred despite the lmdeploy documentation stating that any model in HuggingFace format should be supported for inference. The same process worked previously for a model of type vicuna but fails for mistral. Local inference with the model works correctly.
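For context, the failure mode can be illustrated with a minimal, self-contained sketch of the kind of model-type registry check that produces this error. The supported-type set and helper function below are hypothetical illustrations for this thread, not lmdeploy's or Transformers' actual code:

```python
import json

# Hypothetical set of recognized model types; real registries are much larger.
SUPPORTED_MODEL_TYPES = {"llama", "llava", "vicuna"}

def check_model_type(config_json: str) -> str:
    """Read model_type from a HuggingFace-style config.json and validate it."""
    model_type = json.loads(config_json).get("model_type", "")
    if model_type not in SUPPORTED_MODEL_TYPES:
        raise ValueError(
            f"The checkpoint you are trying to load has model type "
            f"{model_type!r}, which is not recognized."
        )
    return model_type

# llava-v1.6-mistral-7b ships a config declaring model_type "llava_mistral";
# a registry that predates the model rejects it, as in the reported traceback.
try:
    check_model_type('{"model_type": "llava_mistral"}')
except ValueError as e:
    print(e)
```

In the real stack, upgrading to a Transformers/lmdeploy version whose registry includes the new type (here, via #1579) is what resolves the error.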
Reproduction
Attempt to deploy the llava-v1.6-mistral-7b model using lmdeploy.
Observe the error message in the logs.
Environment
Error traceback