
[Feature]- Support for the microsoft/Phi-3-vision-128k-instruct Vision Model #1637

Open
sabarish244 opened this issue May 22, 2024 · 3 comments

@sabarish244

Motivation

The latest release of Microsoft's Phi-3 vision model (128k context) looks promising in both performance and resource usage, since it has only 4.2B parameters. It would be a great feature if the lmdeploy inference server supported it.

Related resources

https://huggingface.co/microsoft/Phi-3-vision-128k-instruct/tree/main
https://azure.microsoft.com/en-us/blog/new-models-added-to-the-phi-3-family-available-on-microsoft-azure/

Additional context

I tried running the model via the lmdeploy Docker inference server: I installed the required additional packages and launched the model. The model loaded and ran, but when trying to run inference through the APIs, we get either an empty response or an internal server error.
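
For reference, the requests were made roughly like this. A minimal sketch, assuming the lmdeploy `api_server` is exposing its OpenAI-compatible endpoint on the default port 23333; the image URL below is a placeholder, not the exact value used:

```python
# Minimal sketch of the failing request path, assuming the lmdeploy
# api_server exposes its OpenAI-compatible endpoint on localhost:23333.
# The image URL is a placeholder.
from openai import OpenAI

client = OpenAI(api_key="none", base_url="http://localhost:23333/v1")

response = client.chat.completions.create(
    model="microsoft/Phi-3-vision-128k-instruct",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image."},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/sample.jpg"}},
        ],
    }],
)
# Expected: generated text; observed: empty response or an HTTP 500.
print(response.choices[0].message.content)
```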

@lvhan028 lvhan028 assigned grimoire and RunningLeon and unassigned grimoire May 22, 2024
@RunningLeon
Collaborator

@sabarish244 Hi, thanks for the info. We have added this model to our TODO list.

@Youho99

Youho99 commented Jun 19, 2024

A first implementation of Phi3-Vision has been made in the vLLM project.

Maybe this can help.

vllm-project/vllm#4986

@RunningLeon
Collaborator

@Youho99 hi, you are welcome to try PR #1845.
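
For anyone who wants to try it, a minimal sketch of exercising the new model with lmdeploy's offline vision-language pipeline, assuming a build that already includes the support from PR #1845 (the image URL is a placeholder):

```python
# Minimal sketch using lmdeploy's VLM pipeline, assuming an lmdeploy
# build that includes the Phi-3-vision support from PR #1845.
from lmdeploy import pipeline
from lmdeploy.vl import load_image

pipe = pipeline('microsoft/Phi-3-vision-128k-instruct')

# Placeholder image URL; replace with a real image.
image = load_image('https://example.com/sample.jpg')
response = pipe(('Describe this image.', image))
print(response.text)
```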
