[Feature] Support for the microsoft/Phi-3-vision-128k-instruct Vision Model #1637
Comments
@sabarish244 Hi, thanks for your info. We have added this model to the TODO list.
A first implementation of Phi3-Vision was made on the vLLM project. Maybe this can help.
Motivation
Microsoft's latest Phi-3 vision model release looks promising in performance, and with just 4.2B parameters and a 128k context window it is resource-efficient as well. It would be a great feature if the lmdeploy inference server supported it.
Related resources
https://huggingface.co/microsoft/Phi-3-vision-128k-instruct/tree/main
https://azure.microsoft.com/en-us/blog/new-models-added-to-the-phi-3-family-available-on-microsoft-azure/
Additional context
I tried running the model via the lmdeploy Docker inference server: I installed the required additional packages and launched the model. The model loads and runs, but when trying to run inference through the APIs, I get either an empty response or an internal server error.
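For context, here is a minimal sketch of the kind of request that triggers the problem, assuming the server was started with lmdeploy's OpenAI-compatible api_server on its default port; the host, port, and image URL below are placeholders, not values confirmed in this issue:

```python
# Minimal sketch of the failing request, assuming an OpenAI-compatible
# lmdeploy api_server is running locally. Host, port, and image URL are
# placeholder assumptions, not values taken from this issue.
import requests

payload = {
    "model": "microsoft/Phi-3-vision-128k-instruct",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/sample.jpg"},
                },
            ],
        }
    ],
}

resp = requests.post(
    "http://localhost:23333/v1/chat/completions",  # assumed default lmdeploy port
    json=payload,
    timeout=60,
)
print(resp.status_code)  # a 500 here would match the reported internal server error
print(resp.json())       # empty output would match the reported empty response
```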